S1/BIMA/TME/TME1/TME1_Kordon_Durand/TME1_Kordon_Durand.ipynb | ###Markdown
Practical work 1: introduction and image enhancement
- Quick start for Python (10 minutes!): https://www.stavros.io/tutorials/python/
- Quick start for Numpy: https://numpy.org/devdocs/user/quickstart.html
- For Matlab users: Numpy is very similar but with some important differences, see http://mathesaurus.sourceforge.net/matlab-numpy.html
- Keep in mind that in Python, except for variables of scalar type, everything is a reference and assignment is not a copy.

Short introduction to image processing with Python. Help: use the function `help()` to get information on a Python object. Images are stored as arrays, the default type of the `numpy` module. The default type of array elements is `float64`, following the IEEE 754 standard. Special float values are defined: infinity (`inf`) and undefined (`nan`, *not a number*), as well as some numerical constants such as $\pi$.
###Code
# import numpy
import numpy as np
import matplotlib.pyplot as plt
# predefined constants
print(np.inf,np.nan,np.pi)
# some values
print( 1., 1e10, -1.2e-3)
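# illustration: special values propagate through arithmetic, and nan never
# compares equal to itself (the names below are purely for demonstration)
x = np.inf + 1            # still inf
same = np.nan == np.nan   # False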
###Output
inf nan 3.141592653589793
1.0 10000000000.0 -0.0012
###Markdown
Creating an array: several ways.
1. From a list of values (formally, any Python iterable object). Elements of an array all have the same **type**, determined by Numpy:
###Code
V = np.array([1,2,3])
M = np.array([[1,2,3],[4,5,6.]])
print ("V is of type",V.dtype)
print ("M is of type",M.dtype)
###Output
V is of type int32
M is of type float64
###Markdown
2. Without values: Numpy has constructors such as `empty()`, `zeros()`, `ones()`... Shape should be given (see below). Important: `empty()` does not initialize array elements.
###Code
I = np.zeros((3,4))
print(I)
J = np.empty((4,3))
print(J)
###Output
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
###Markdown
3. From a sequence, prefer `arange()` from numpy to `range()` from python.
###Code
print(np.arange(10))
print(np.arange(0,10,2))
print(np.arange(9,-1,-.5))
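# np.linspace(start, stop, num) is the companion constructor when a fixed
# number of evenly spaced points is wanted, e.g. np.linspace(0, 1, 5)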
###Output
[0 1 2 3 4 5 6 7 8 9]
[0 2 4 6 8]
[ 9. 8.5 8. 7.5 7. 6.5 6. 5.5 5. 4.5 4. 3.5 3. 2.5
2. 1.5 1. 0.5 0. -0.5]
###Markdown
Shape of an array. The shape describes the number of elements along each dimension. A vector has dimension 1, a matrix dimension 2; higher dimensions are possible. Shape is not size, which is the total number of elements of an array. The type of a shape is always a tuple of integers. With the previous example:
###Code
print(I.shape, I.size)
print(J.shape, J.size)
print(V.shape, V.size)
###Output
(3, 4) 12
(4, 3) 12
(3,) 3
###Markdown
An important function/method is `reshape()`, which changes the shape of an array. A typical usage of `reshape()` is to transform a vector into a matrix or vice versa.
###Code
K = np.arange(12).reshape((3,4))
print(K)
print(np.reshape(K,(12)))
print(K.reshape((2,2,3)))
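# one dimension may be given as -1 and NumPy infers it, e.g. K.reshape((2, -1))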
###Output
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[ 0 1 2 3 4 5 6 7 8 9 10 11]
[[[ 0 1 2]
[ 3 4 5]]
[[ 6 7 8]
[ 9 10 11]]]
###Markdown
Elements of an array. Access elements by indices: two syntaxes are possible; the first one in the example is preferred. Negative indices are allowed, with the same meaning as for Python lists.
###Code
I = np.arange(12).reshape((3,4))
print(I[1,2])
print(I[0][0])
print(I[-1,0])
###Output
6
0
8
###Markdown
Access by groups of indices using the operator `:` allows extracting subarrays. The general syntax is `start:end:step` and it is very powerful:
###Code
print('extract the first line')
print(I[0,:])
print(I[0,0:])
print(I[0,::])
print(I[0,::1])
print('extract center of the array')
print(I[1:3,1:3])
print('extract elements with even indices')
print(I[::2,::2])
print('print the horizontal mirror of an array')
print(I[:,::-1])
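# note: such slices are views sharing memory with I; use .copy() to get an
# independent subarray, e.g. sub = I[1:3, 1:3].copy()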
###Output
extract the first line
[0 1 2 3]
[0 1 2 3]
[0 1 2 3]
[0 1 2 3]
extract center of the array
[[ 5 6]
[ 9 10]]
extract elements with even indices
[[ 0 2]
[ 8 10]]
print the horizontal mirror of an array
[[ 3 2 1 0]
[ 7 6 5 4]
[11 10 9 8]]
###Markdown
Array arithmetic. Operators and functions can be applied to arrays. Most operations are element-wise (i.e. applied element by element); consequently, the arrays should have the same shape. One operand can also be a scalar in most cases.
###Code
A = np.arange(12).reshape((3,4))
B = 2 * A + 1
C = A + B
D = np.cos(2*np.pi*A/12)
print (D)
print (D**2)
print (D>0)
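# shapes must match or be broadcastable: A + np.arange(4) adds a vector to each
# row of A (shape (3, 4)), while A + np.arange(3) would raise a shape error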
###Output
[[ 1.00000000e+00 8.66025404e-01 5.00000000e-01 6.12323400e-17]
[-5.00000000e-01 -8.66025404e-01 -1.00000000e+00 -8.66025404e-01]
[-5.00000000e-01 -1.83697020e-16 5.00000000e-01 8.66025404e-01]]
[[1.00000000e+00 7.50000000e-01 2.50000000e-01 3.74939946e-33]
[2.50000000e-01 7.50000000e-01 1.00000000e+00 7.50000000e-01]
[2.50000000e-01 3.37445951e-32 2.50000000e-01 7.50000000e-01]]
[[ True True True True]
[False False False False]
[False False True True]]
###Markdown
Arrays may be viewed as matrices, so we can perform some linear algebra. For example, `np.matmul()` is the matrix multiplication; it can be used to build a matrix from a vector. An example, using the transpose attribute `T`:
###Code
L = np.arange(1,6).reshape((1,5))
# transpose of L. Warning: C is a view that shares memory with L
C = L.T
# A copy is better if you want to modify L without affecting C
C = L.T.copy()
print("A 5*5 matrix:")
print(np.matmul(C,L))
print("A dot product, but result is a matrix:")
print(np.matmul(L,C))
print(np.matmul(L,C)[0,0])
print("dot() is prefered with vectors:")
V = np.arange(1,6)
print(V.dot(V))
print(np.dot(V,V))
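# the @ operator is shorthand for matmul: L @ C gives the same [[55]] matrix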
###Output
A 5*5 matrix:
[[ 1 2 3 4 5]
[ 2 4 6 8 10]
[ 3 6 9 12 15]
[ 4 8 12 16 20]
[ 5 10 15 20 25]]
A dot product, but result is a matrix:
[[55]]
55
dot() is preferred with vectors:
55
55
###Markdown
Images. We make use of the PIL module (https://pillow.readthedocs.io/en/stable/reference/Image.html) to load and write images; a PIL image is easily converted to a Numpy array. Be careful: the array type depends on the image.
###Code
from PIL import Image
# reading an image and convert to array
myimage = np.array(Image.open('img/moon.png'))
# write an image (alternative format) from an array
Image.fromarray(myimage).save('image.jpg')
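# a possible variant (sketch): force an 8-bit grayscale conversion at load time
# gray = np.array(Image.open('img/moon.png').convert('L'))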
###Output
_____no_output_____
###Markdown
An array can be displayed as an image using the Matplotlib module. Here is a short example:
###Code
import matplotlib.pyplot as plt
# minimal example:
plt.imshow(myimage)
plt.show()
# with more controls:
w,h=400,400
plt.figure(figsize=(w/80,h/80)) # optional, to control the size of figure (unit: pixel)
plt.gray() # optional call to display image using a gray colormap
plt.title('This is an image') # optional: add a title
plt.axis('off') # optional: remove axes
plt.imshow(myimage)
plt.show()
###Output
_____no_output_____
###Markdown
See also:
- https://matplotlib.org/3.1.1/tutorials/introductory/images.html
- https://matplotlib.org/gallery/images_contours_and_fields/image_demo.html#sphx-glr-gallery-images-contours-and-fields-image-demo-py

Exercise 1. In this exercise, we work with the image `img/moon.png`. If possible, give two solutions: one with loops (for, while, ...) and one without loops. 1. Write and test a function `openImage()` taking an image filename as argument and returning the array of pixel values.
###Code
from PIL import Image
import numpy as np
def openImage(fname):
    """ str -> Array
    (notation above means the function gets a string argument and returns an Array object)
    """
    myimage = np.array(Image.open(fname))
    return myimage

def openImage2(fname):
    """ str -> vector
    """
    myimage = np.array(Image.open(fname))
    vecteur = np.array([])
    for i in myimage:
        vecteur = np.append(vecteur, i)
    return vecteur
d=openImage("img/moon.png")
print(d.shape)
###Output
(537, 358)
###Markdown
2. Write and test a function `countPixels()` getting an array and an integer `k` as arguments and returning the number of pixels having the value `k`.
###Code
def countPixels(I,k):
count=np.unique(I)
for v in count:
if(v==k):
return np.sum(np.where(I==v,1,0))
return 0
v=countPixels(d,1)
print(v)
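# a loop-free sketch of the same count, using a boolean mask
# (countPixels2 is only an illustrative name):
def countPixels2(I, k):
    return int(np.sum(I == k))
# countPixels2(d, 1) should match the value printed above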
###Output
20126
###Markdown
3. Write and test a function `replacePixels()` taking an array and two integers, replacing pixels having value `k1` with value `k2`, and returning the new array. Be careful not to modify `I`.
###Code
def replacePixels(I,k1,k2):
return np.where(I==k1,k2,I)
a=replacePixels(d,3,7)
print(a)
print(d)
###Output
[[ 1 7 7 ... 8 16 8]
[ 7 7 7 ... 4 11 12]
[ 6 4 6 ... 7 2 7]
...
[ 4 8 8 ... 6 4 8]
[ 4 8 8 ... 4 6 6]
[ 2 7 7 ... 6 9 9]]
[[ 1 3 7 ... 8 16 8]
[ 3 7 3 ... 4 11 12]
[ 6 4 6 ... 7 2 3]
...
[ 4 8 8 ... 6 4 8]
[ 4 8 8 ... 4 6 6]
[ 2 3 3 ... 6 9 9]]
###Markdown
4. Write and test a function `normalizeImage()` getting an array and two integers `k1` and `k2` and returning an array with elements normalized to the interval $[k_1,k_2]$.
###Code
def normalizeImage(I,k1,k2):
norm = (I - np.min(I)) / (np.max(I) - np.min(I))*(k2-k1)+k1
norm2 = norm.astype(int)
return norm2
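# note: a constant image (max == min) would make the division above fail
# quick check (sketch): normalizeImage(d, 10, 50) should have min 10 and max 50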
###Output
_____no_output_____
###Markdown
5. Write and test a function `inverteImage()` taking an array and returning an array with inverted pixel values (i.e. the transform $k \mapsto 255-k$).
###Code
def inverteImage(I):
    # loop version; note that it modifies I in place
    # (the loop-free alternative would simply be: return 255 - I)
    for i in range(len(I)):
        for v in range(len(I[i])):
            I[i][v] = 255 - I[i][v]
    return I
inverteImage(d)
print(d)
###Output
[[254 252 248 ... 247 239 247]
[252 248 252 ... 251 244 243]
[249 251 249 ... 248 253 252]
...
[251 247 247 ... 249 251 247]
[251 247 247 ... 251 249 249]
[253 252 252 ... 249 246 246]]
###Markdown
6. Write and test a function `computeHistogram()` taking an array and returning its histogram. The histogram can be an array or a list. It is forbidden to use a histogram method from a Python module. Is it possible to compute the histogram without explicitly visiting the array pixels?
###Code
def computeHistogram(I):
    c = 0
    hist = np.array([])
    # go one past the maximum value so that the maximum itself is counted
    f = np.max(I) + 1
    for i in range(f):
        c = countPixels(I, i)
        hist = np.append(hist, c)
    #print(hist.shape)
    fd = np.arange(0, f, 1)
    plt.bar(fd, hist, width=10)
    #print(hist)
    #print(np.unique(I))
    return hist
w=openImage("img/moon.png")
x=computeHistogram(w)
# use comments to answer to a verbal question
# thanks to Numpy we can avoid visiting every pixel explicitly, using np.where inside the countPixels function
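# fully vectorized sketch for integer images, with no Python-level visit of pixels:
# hist_fast = np.bincount(w.ravel(), minlength=int(np.max(w)) + 1)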
###Output
_____no_output_____
###Markdown
7. Write and test a function `thresholdImage()` getting an array `I` and an integer `s` and returning an array having elements set to 0 if corresponding element of `I` is lower than `s` or 255 otherwise.
###Code
def thresholdImage(I,s):
return np.where((I<s),0,255)
""" Array*int -> Array """
s=thresholdImage(w,160)
plt.imshow(s)
###Output
_____no_output_____
###Markdown
8. Using the previous functions, give a series of instructions to read and display an image, plot its histogram (one can use `plot()` or `bar()` from the `matplotlib.pyplot` module), invert the image and display it, then plot its histogram.
###Code
import matplotlib.pyplot as plt
# the histogram is already displayed inside the computeHistogram function
###Output
_____no_output_____
###Markdown
9. Give a series of instructions to read and display an image, plot the histogram, normalize the image to the interval $[10,50]$, compute the new histogram, display the image and the histogram. Remark: `imshow()` normalizes the image. To avoid this and see the effect of the normalization, use `imshow()` with parameters `vmin=0,vmax=255`. Comment on the results.
###Code
w=openImage("img/moon.png")
plt.imshow(w)
computeHistogram(w)
norme=normalizeImage(d,10,50)
print(norme)
#print(norme.shape)
x=computeHistogram(norme)
plt.imshow(norme,vmin=10,vmax=50)
###Output
_____no_output_____
###Markdown
10. Same question as 9, replacing the normalization with a thresholding with parameter $s=127$.
###Code
w=openImage("img/moon.png")
s=thresholdImage(w,127)
plt.imshow(s)
x=computeHistogram(s)
###Output
_____no_output_____
###Markdown
Exercise 2 - generate images. 1. Create the 4-by-4 array `I` corresponding to the following image: black pixels have value 0, white pixels value 255, and grey pixels value 127. Display the image using `imshow()` and plot the histogram.
###Code
tab = np.array([[127, 127, 0, 255],
[127, 0, 0, 255],
[0 , 127, 0, 255],
[127, 127, 0, 255]])
plt.imshow(tab)
x=computeHistogram(tab)
###Output
_____no_output_____
###Markdown
2. We want to generate a matrix having random values. The functions `rand()` and `randn()` from the `numpy.matlib` module generate arrays of a given shape with random values following, respectively, a uniform distribution on $[0,1[$ and a normal distribution. Create an array of shape 512 by 512 having **integer** elements following a uniform distribution on the set $\{0,1,\cdots,255\}$. We also want to create an array following a Gaussian distribution with a mean of 128, a standard deviation of 16 and **integer** values. Display the images and their histograms. Discuss the results.
###Code
import numpy.matlib
def createarrayuniform(i,j):
w=np.random.rand(i,j)*255
return w.astype(int)
a=createarrayuniform(512,512)
plt.imshow(a)
x=computeHistogram(a)
# in this image the colours are chosen purely at random, so no colour predominates and the distribution is uniform
import math as mht
def createarraygaussian(means, ecartype, i, j):
    # ecartype is the standard deviation, so it scales randn directly
    # (a square root would only apply if the variance were given)
    a = ecartype * np.random.randn(i, j) + means
    return a.astype(int)
v=createarraygaussian(128,16,512,512)
plt.imshow(v)
x=computeHistogram(v)
# in this image one colour dominates, the one located around the mean of 128, and the values are spread according to a normal law, which can be recognized in the image histogram
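# alternative sketch for the uniform integer case:
# np.random.randint(0, 256, size=(512, 512)) draws the integers directly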
###Output
_____no_output_____
###Markdown
Exercise 3: image manipulation. In this exercise, we work with the image `img/pout.png`. 1. Read and display this image.
###Code
z=openImage("img/pout.png")
plt.imshow(z)
###Output
_____no_output_____
###Markdown
2. Examine the histogram. Determine the extrema of the image. What can you say about the quality of this image?
###Code
s=computeHistogram(z)
s.shape
# few grey levels are used, hence a low-contrast image of poor quality
###Output
_____no_output_____
###Markdown
3. Using functions from Exercise 1, write the function `histogramEqualization()` taking an image and its histogram, applying a histogram equalization, and returning the new image. Test this function on `pout.png` and discuss the result.
###Code
def histogramEqualization(I,h):
    (n,m) = I.shape
    Ib = np.copy(I)
    L = np.amax(I)
    Hc = np.zeros(L)
    sum = 0
    # build the cumulative histogram
    for i in range(0,L):
        sum += h[i]
        Hc[i] = sum
    # equalize the histogram
    for i in range(0,n):
        for j in range(0,m):
            Ib[i,j] = (((L-1)/(n*m))*Hc[Ib[i,j]-1])
    return Ib
mm=histogramEqualization(z,s)
plt.imshow(mm)
# the histogram equalization brought out the contrasts and therefore improved the overall visibility of the photo
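# a loop-free equalization sketch for comparison (assumes 8-bit values):
# Hc2 = np.cumsum(np.bincount(z.ravel(), minlength=256))
# eq = (255 * Hc2[z] / z.size).astype(z.dtype)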
###Output
_____no_output_____ |
Pymaceuticals/.ipynb_checkpoints/pymaceuticals_starter-checkpoint.ipynb | ###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights. Look across all previously generated figures and tables and write at least three observations or inferences that can be made from the data. Include these observations at the top of the notebook.
* There is a mouse (g989) whose timepoints are repeated (duplicated); after removing it, we obtain the real number of mice, which is 248.
* The number of mice per Drug Regimen is 25, except for Propriva and Stelasyn, which have 24. We can also see that male mice predominate over female mice in the whole experiment.
* From the IQR we can determine that the only drug with a potential outlier is Infubinol.
* Finally, the last two charts show that, for the Capomulin regimen, mouse weight is roughly proportional to tumor volume. The graph for mouse "s185" also shows that the tumor volume decreases as the timepoint progresses.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
#print(mouse_metadata, study_results)
# Combine the data into a single dataset
combined_data = pd.merge(mouse_metadata, study_results, how = 'inner', on = 'Mouse ID')
# Display the data table for preview
combined_data
# Checking the number of mice.
number_mice = len(combined_data['Mouse ID'].value_counts())
number_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = combined_data.loc[combined_data.duplicated(subset = ['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
duplicate_mice
# Optional: Get all the data for the duplicate mouse ID.
mice_g989 = combined_data[combined_data['Mouse ID'] == 'g989']
mice_g989
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
cleaned_dataframe = combined_data[combined_data['Mouse ID'] != 'g989']
cleaned_dataframe
# Checking the number of mice in the clean DataFrame.
number_mice2 = len(cleaned_dataframe['Mouse ID'].value_counts())
number_mice2
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean = cleaned_dataframe.groupby("Drug Regimen")['Tumor Volume (mm3)'].mean()
print(mean)
median = cleaned_dataframe.groupby("Drug Regimen")['Tumor Volume (mm3)'].median()
print(median)
variance = cleaned_dataframe.groupby("Drug Regimen")['Tumor Volume (mm3)'].var()
print(variance)
standard_dev = cleaned_dataframe.groupby("Drug Regimen")['Tumor Volume (mm3)'].std()
print(standard_dev)
SEM = cleaned_dataframe.groupby("Drug Regimen")['Tumor Volume (mm3)'].sem()
print(SEM)
# This method is the most straighforward, creating multiple series and putting them all together at the end.
summary_data = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance, "Standard deviation": standard_dev, "SEM":SEM})
summary_data
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
second_method = cleaned_dataframe.groupby('Drug Regimen').agg(
Mean = ('Tumor Volume (mm3)', 'mean'),
Median = ('Tumor Volume (mm3)', 'median'),
Variance = ('Tumor Volume (mm3)', 'var'),
Standard_Deviation = ('Tumor Volume (mm3)', 'std'),
SEM = ('Tumor Volume (mm3)', 'sem'))
print(second_method)
###Output
Mean Median Variance Standard_Deviation SEM
Drug Regimen
Capomulin 40.675741 41.557809 24.947764 4.994774 0.329346
Ceftamin 52.591172 51.776157 39.290177 6.268188 0.469821
Infubinol 52.884795 51.820584 43.128684 6.567243 0.492236
Ketapril 55.235638 53.698743 68.553577 8.279709 0.603860
Naftisol 54.331565 52.509285 66.173479 8.134708 0.596466
Placebo 54.033581 52.288934 61.168083 7.821003 0.581331
Propriva 52.320930 50.446266 43.852013 6.622085 0.544332
Ramicane 40.216745 40.673236 23.486704 4.846308 0.320955
Stelasyn 54.233149 52.431737 59.450562 7.710419 0.573111
Zoniferol 53.236507 51.818479 48.533355 6.966589 0.516398
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
total_mice = cleaned_dataframe.groupby("Drug Regimen")['Mouse ID'].nunique()
#print(total_mice)
total_mice.plot(kind='bar', facecolor ='green', figsize=(10,3), width=0.8,label = 'Count')
# Set x and y limits
x_axis = np.arange(len(total_mice))
plt.xlim(-1, len(x_axis))
plt.ylim(15, max(total_mice)+2)
# Set a Title and labels
plt.legend()
plt.xlabel('Drug Regimen')
plt.ylabel('Total Number of Mice')
plt.title('Total Number of Mice for each Treatment')
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
number = total_mice.values
#print(number)
# Set x axis and tick locations
x_axis = np.arange(len(number))
drug_regimen = total_mice.index
tick_locations = [value for value in x_axis]
# Create a list indicating where to write x labels and set figure size to adjust for space
plt.figure(figsize=(10,3))
plt.bar(x_axis, number, color='green', alpha=1, align="center", label = 'Count')
plt.xticks(tick_locations, drug_regimen, rotation="vertical")
# Set x and y limits
plt.xlim(-1, len(x_axis))
plt.ylim(15, max(number)+2)
# Set a Title and labels
plt.legend()
plt.title("Total Number of Mice for each Treatment")
plt.xlabel("Drug Regimen")
plt.ylabel("Total Number of Mice")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender = cleaned_dataframe.groupby("Sex")['Mouse ID'].nunique()
print(gender)
gender.plot(kind='pie', autopct='%1.2f%%', explode=[0.1,0], colors=['purple','blue'], shadow=True, startangle=120, legend=True)
plt.ylabel ('')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender2 = gender
#print(gender2)
labels = gender.index
plt.pie(gender2, autopct='%1.2f%%',labels=labels, explode=[0.1,0], colors=['purple','blue'], shadow=True, startangle=120)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
final_volume = cleaned_dataframe.groupby(['Mouse ID', 'Drug Regimen']).agg(Timepoint = ('Timepoint', 'max'))
final_volume2 = final_volume.merge(cleaned_dataframe, how = 'inner', on = ['Mouse ID', 'Timepoint'])
final_volume2
# Put treatments into a list for for loop (and later for plot labels)
treatment = ['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
#treatment
# Create empty list to fill with tumor vol data (for plotting)
empty_list = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for value in treatment:
# Locate the rows which contain mice on each drug and get the tumor volumes
tumor_volume = final_volume2['Tumor Volume (mm3)'].loc[final_volume2['Drug Regimen'] == value]
#print(tumor_volume)
# add subset
empty_list.append(tumor_volume)
# Determine outliers using upper and lower bounds
quartiles = tumor_volume.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"For {value} the interquartile range is: {iqr}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"For {value} values below {lower_bound} could be outliers.")
print(f"For {value} values above {upper_bound} could be outliers.\n")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
red_diamond = dict(markerfacecolor='r', marker='D')
fig, ax = plt.subplots()
ax.set_title('Final Tumor Volume of Each Mouse Across 4 Drug Regimens')
ax.set_ylabel('Volume')
ax.boxplot(empty_list, flierprops=red_diamond)
ax.set_xticklabels(treatment)
ax.set_xlabel('Drug Regimen')
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
lineplot = cleaned_dataframe.loc[(cleaned_dataframe['Mouse ID'] == 's185')]
lineplot.plot(x='Timepoint',y='Tumor Volume (mm3)', color="gray")
plt.title("Timepoint vs Tumor Volume for a Mouse Treated with Capomulin")
plt.ylabel('Tumor Volume (mm3)')
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin = cleaned_dataframe.loc[cleaned_dataframe['Drug Regimen'] == 'Capomulin']
avg_capomulin = capomulin.groupby(['Weight (g)']).mean()
avg_capomulin = avg_capomulin.reset_index()
#avg_capomulin
plt.scatter(avg_capomulin['Weight (g)'],avg_capomulin['Tumor Volume (mm3)'])
plt.title("Mouse Weight vs Average Tumor Volume for the Capomulin Regimen")
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Weight (g)')
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
x_values = avg_capomulin['Weight (g)']
y_values = avg_capomulin['Tumor Volume (mm3)']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(15,44),fontsize=15,color="red")
plt.xlabel('Weight')
plt.ylabel('Tummor Volume')
plt.title('Mouse Weight & Average Tumor Volume for the Capomulin Regimen')
plt.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merged_data = pd.merge(mouse_metadata, study_results, on="Mouse ID")
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straighforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
--- **Data Visualization using Matplotlib - Pymaceuticals** Submitted by: Sheetal Bongale | UT Data Analysis and Visualization. *This Jupyter notebook analyzes and visualizes clinical trial data to test given drugs and their possible effects on tumor volume, metastasis and survival rate.* --- Note: Please find the written analysis of this data at the bottom of this Jupyter notebook.
###Code
# Dependencies and Setup
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
# File to Load (Remember to Change These)
mouse_drug_data_to_load = "data/mouse_drug_data.csv"
clinical_trial_data_to_load = "data/clinicaltrial_data.csv"
# Read the Mouse and Drug Data and the Clinical Trial Data
mouse_drug_df = pd.read_csv(mouse_drug_data_to_load)
clinical_trial_df = pd.read_csv(clinical_trial_data_to_load)
# Combine the data into a single dataset
df = mouse_drug_df.merge(clinical_trial_df, on="Mouse ID")
# Display the data table for preview
df.head()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
mean_tumor_df = df[["Drug", "Timepoint", "Tumor Volume (mm3)"]].groupby(["Drug", "Timepoint"]).mean()
# Preview DataFrame
mean_tumor_df.reset_index(inplace=True)
mean_tumor_df
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
stderror_tumor_df= df[["Drug", "Timepoint", "Tumor Volume (mm3)"]].groupby([ "Drug", "Timepoint"]).sem()
# Preview DataFrame
stderror_tumor_df.reset_index(inplace=True)
stderror_tumor_df.head()
# Minor Data Munging to Re-Format the Data Frames
mean_tumor = mean_tumor_df.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
# Preview that Reformatting worked
mean_tumor
# Same for Std. Error Data Frame
stderror_tumor = stderror_tumor_df.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
# Preview that Reformatting worked
stderror_tumor
# Generate the Plot (with Error Bars)
markers = ["o", "^", "s", "d", "p", "s", "o", "^", "h", "v"]
plt.figure(figsize=(12,9))
for drug, marker in zip(mean_tumor.columns, markers):
tumor_plt = plt.errorbar(x = mean_tumor.index,
y = mean_tumor[drug],
yerr = stderror_tumor[drug],
marker = marker,
linestyle = '--',
linewidth = 1.5)
# Show the Figure
plt.title("Tumor Response to Treatment", fontsize="x-large")
plt.xlabel("Time (Days)", fontsize="x-large")
plt.ylabel("Tumor Volume (mm3)", fontsize="x-large")
plt.grid()
plt.legend(labels = mean_tumor.columns, fontsize="large")
plt.savefig("plots/Tumor_Response.png")
plt.show()
###Output
_____no_output_____
###Markdown
Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
mean_metastatic_df = df[["Drug", "Timepoint", "Metastatic Sites"]].groupby(["Drug", "Timepoint"]).mean()
# Preview DataFrame
mean_metastatic_df.reset_index(inplace=True)
mean_metastatic_df.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
stderror_met_df= df[["Drug", "Timepoint", "Metastatic Sites"]].groupby(["Drug", "Timepoint"]).sem()
# Preview DataFrame
stderror_met_df.reset_index(inplace=True)
stderror_met_df.head()
# Minor Data Munging to Re-Format the Data Frames
mean_metastatic = mean_metastatic_df.pivot(index="Timepoint", columns="Drug", values="Metastatic Sites")
# Preview that Reformatting worked
mean_metastatic.head()
# Same for Std. Error Data Frame
stderror_met = stderror_met_df.pivot(index="Timepoint", columns="Drug", values="Metastatic Sites")
# Preview that Reformatting worked
stderror_met.head()
# Generate the Plot (with Error Bars)
markers = ["o", "^", "s", "d", "p", "s", "o", "^", "h", "v"]
plt.figure(figsize=(12,9))
for drug, marker in zip(mean_metastatic.columns, markers):
met_plt = plt.errorbar(x = mean_metastatic.index,
y = mean_metastatic[drug],
yerr = stderror_met[drug],
marker = marker,
linestyle = '--',
linewidth = 1.5)
# Show the Figure
plt.title("Metastatic Spread During Treatment", fontsize="x-large")
plt.xlabel("Treatment Duration (Days)", fontsize="x-large")
plt.ylabel("Met. Sites", fontsize="x-large")
plt.legend(labels = mean_metastatic.columns, fontsize="large")
plt.grid()
plt.savefig("plots/Metastatic_Spread.png")
plt.show()
###Output
_____no_output_____
###Markdown
Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
mouse_count_df = df[["Drug", "Timepoint", "Mouse ID"]].groupby(["Drug", "Timepoint"]).count()
# Preview DataFrame
mouse_count_df = mouse_count_df.rename(columns={'Mouse ID': "Mouse Count"})
mouse_count_df.reset_index(inplace=True)
mouse_count_df.head()
# Minor Data Munging to Re-Format the Data Frames
mouse_count = mouse_count_df.pivot(index="Timepoint", columns="Drug", values="Mouse Count")
# Preview the Data Frame
mouse_count.head()
# Generate the Plot (Accounting for percentages)
markers = ["o", "^", "s", "d", "p", "s", "o", "^", "h", "v"]
mouse_survival_percent = mouse_count.div(mouse_count.loc[0], axis='columns')*100
plt.figure(figsize=(12,9))
for drug, marker in zip (mouse_survival_percent.columns, markers):
mouse_plt = plt.plot(mouse_survival_percent.index,
mouse_survival_percent[drug],
marker = marker,
linestyle = '--',
linewidth = 1.5)
# Show the Figure
plt.title("Survival During Treatment", fontsize="x-large")
plt.xlabel("Time (Days)", fontsize="x-large")
plt.ylabel("Survival Rate (%)", fontsize="x-large")
plt.legend(labels = mouse_survival_percent.columns, loc=3, fontsize="large")
plt.grid()
plt.savefig("plots/Survival_Rates.png")
plt.show()
# Display the percentage numbers of Survival rates
mouse_survival_percent
###Output
_____no_output_____
###Markdown
Summary Bar Graph
###Code
# Calculate the percent changes for each drug
percent_change_df = pd.Series()
first_index = mean_tumor.index[0]
last_index = mean_tumor.index[-1]
for val in mean_tumor.columns:
percent_change_df[val] = (mean_tumor[val][last_index] - mean_tumor[val][first_index]) * 100 / mean_tumor[val][first_index]
percent_change_df
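# equivalent vectorized sketch of the loop above:
# percent_change_df = (mean_tumor.iloc[-1] - mean_tumor.iloc[0]) / mean_tumor.iloc[0] * 100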
# Splice the data between passing and failing drugs
color = ["red" if value > 0 else "green" for value in percent_change_df]
# Render Glyphs
plt.rcParams["figure.figsize"] = (12,9)
fig, ax = plt.subplots()
ax.bar(mean_tumor.columns, percent_change_df, color=color)
# Use functions to label the percentages of changes
for i, v in enumerate(percent_change_df):
ax.text(i-0.2, np.sign(v)*3, str(int(v))+"%", color='black', fontsize=12)
# Orient widths. Add labels, tick marks, etc
plt.title("Change in Tumor Volume Over 45 days Treatment", fontsize="x-large")
plt.xlabel("Drug Names", fontsize="x-large", labelpad=15)
plt.ylabel("% Tumor Volume Change", fontsize="x-large")
plt.grid()
# Refine the plot for better visualization
plt.yticks(np.arange(-40,80,step=20))
plt.ylim(-30, max(percent_change_df) + 15)
ax.axhline(y=0, color='dimgrey')
plt.savefig("plots/Percent_Tumor_Change.png")
plt.show()
###Output
_____no_output_____
###Markdown
--- Pymaceuticals Drug Study Analysis Report ---
The following trends were observed after the analysis:
- ***Tumor Response to Treatment:*** 1. *Capomulin* and *Ramicane* were the only two drugs that decreased tumor growth. 2. *Capomulin* reduced tumor size from **45mm3 to 36.24mm3**, while *Ramicane* did even better, from **45mm3 to 34.95mm3**. 3. Almost all other drug treatments showed a similar increase in tumor size, with *Ketapril* having the largest tumor growth (**45mm3 to 70.66mm3**).
- ***Metastatic Spread During Treatment:*** 1. Mice treated with *Capomulin* and *Ramicane* show less metastatic spread than with other drugs. 2. *Ketapril* and *Placebo* developed the most metastatic spread and an increasing number of cancerous sites.
- ***Mouse Survival Rate During Treatment:*** 1. **84%** and **80%** of mice under *Capomulin* and *Ramicane* respectively survived the 45-day treatment. 2. The drug *Propriva* had the worst survival rate, followed by *Infubinol* (**26.92%** and **36%** respectively).
- ***Summary: Change in Tumor Volume Over 45 days Treatment:*** We can conclude that the only two drugs that reduced the tumor volume were *Capomulin* (a reduction of **19.48%**) and *Ramicane* (a reduction of **22.32%**).

--- **Data Visualization Using Bokeh - Pymaceuticals** ---
Attempting to repeat the data visualization with Bokeh, just to give it a try! Summary Bar Graph
###Code
# Bokeh basic imports
from bokeh.plotting import figure, show, output_file
from bokeh.io import show, output_notebook
# Convert dataframe to a list
drugs=mean_tumor_df["Drug"].drop_duplicates().tolist()
#drugs = list(set(drugs)) #This was to avoid duplicate
percents= percent_change_df.tolist()
# Color format
color = ["red" if value > 0 else "green" for value in percents]
# Create figure with labels
p = figure(x_range = drugs,
plot_width = 800,
plot_height = 600,
title = 'Change in Tumor Volume Over 45 days Treatment',
x_axis_label = 'Drug',
y_axis_label = '% Tumor Volume Change')
# Render glyph
p.vbar(x= drugs, top = percents, color=color, width = 1)
# Show the plot
output_file("bokeh_outputs/percent.html")
show(p)
###Output
_____no_output_____
###Markdown
Observations and Insights. Based on the analysis conducted below, we can reach the following observations:
1. Based on the summary analysis of the tumor growth for all the mice in each drug regimen, the following four drugs appear to be the most promising in decreasing or minimizing the increase of tumor growth: Capomulin, Ramicane, Propriva, Ceftamin. The first two regimens show a decrease in tumor growth and the last two have the smallest growth compared to the other drug regimens.
2. There appears to be a strong correlation between a mouse's weight and tumor size when looking at the Capomulin drug regimen data.
3. Based on the summary data of all drug regimens, it appears as though the drug Ketapril led to worse outcomes than the placebo: there was a slightly larger increase in tumor volume compared to mice in the placebo group, but further analysis is needed here.
Code
###Code
%matplotlib inline
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as stats
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_mice_df = pd.merge(study_results, mouse_metadata, how="outer", on="Mouse ID")
mice_sorted_df = combined_mice_df.sort_values(by=["Mouse ID", "Timepoint"])
mice_sorted_df
# Checking the number of mice in the DataFrame.
number_of_mice = len(mice_sorted_df["Mouse ID"].unique())
number_of_mice
###Output
_____no_output_____
###Markdown
Assumption: when de-duplicating, it is more valuable to keep the tumor size recorded at the last timepoint for each mouse, because that size reflects the full effect of the drug regimen.
###Code
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
de_duped_mice_df = mice_sorted_df.drop_duplicates("Mouse ID", "last")
de_duped_mice_df
# adds new column showcasing the growth or decrease in tumor size from the first measurement of 45 mm3
de_duped_mice_df["Tumor Growth"] = de_duped_mice_df["Tumor Volume (mm3)"] - 45.0
de_duped_mice_df
mice_sorted_df["Tumor Growth"] = de_duped_mice_df["Tumor Growth"]
# Checking the number of mice in the clean DataFrame.
assert (de_duped_mice_df["Mouse ID"].count()) == number_of_mice
mice_sorted_df["Drug Regimen"].unique()
###Output
_____no_output_____
###Markdown
Summary Statistics. Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen.
###Code
# find mean of tumor volume grouped by drug regimen and creating series for tumor volume
drug_regimen_group = mice_sorted_df.groupby(by="Drug Regimen")
tumor_series_group = drug_regimen_group["Tumor Growth"]
tumor_mean = tumor_series_group.mean()
tumor_median = tumor_series_group.median()
tumor_std = tumor_series_group.std()
tumor_variance = tumor_series_group.var()
tumor_sem = tumor_series_group.sem()
# creating summary table
summary_df = pd.DataFrame(data={"Mean":tumor_mean, "Median":tumor_median, "Standard Deviation":tumor_std, "Variance":tumor_variance, "SEM":tumor_sem})
summary_df
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
mice_sorted_df.columns
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
#finding the unique timepoints:
timepoint_labels = mice_sorted_df["Timepoint"].unique().tolist()
number_of_mice_per_timepoint = mice_sorted_df["Timepoint"].value_counts().tolist()
mice_per_timepoint_df = pd.DataFrame(mice_sorted_df["Timepoint"].value_counts())
# tick_locations = [value for value in timepoint_labels]
#Plotting using pandas
mice_per_timepoint_df.plot(kind="bar", title="Number of Mice per Timepoint", xlabel="Timepoint", ylabel="Number of Mice", rot=0)
plt.savefig("../Images/MiceTimepointBar_Pandas.png")
plt.show()
#Plotting using pyplot
plt.bar(timepoint_labels, number_of_mice_per_timepoint, color="r", align="center", tick_label=timepoint_labels)
# titles and axis labels
plt.title("Number of Mice per Timepoint")
plt.xlabel("Timepoint")
plt.ylabel("Number of Mice")
plt.savefig("../Images/MiceTimepointBar_Pyplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
mice_sorted_df.columns
mice_sex_distribution_series = mice_sorted_df["Sex"].value_counts()
mice_sex_distribution_list = mice_sex_distribution_series.tolist()
# Generate a pie plot showing the distribution of female versus male mice using pandas
mice_sex_distribution_series.plot(kind="pie", title="Mice Sex Distribution", legend=True, table=True, ylabel="")
plt.savefig("../Images/MiceSexDistribution_Pandas.png")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(
x=mice_sex_distribution_list,
labels=["Male", "Female"],
colors=["Green", "Purple"],
shadow=5,
startangle=90,
radius=2
)
plt.title("Mice Sex Distribution")
plt.axis("equal")
plt.savefig("../Images/MiceSexDistribution_Pyplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots. Using summary_df we identified that the four most promising treatment regimens are:
1. Capomulin
2. Ramicane
3. Propriva
4. Ceftamin
The first two regimens show a decrease in tumor growth and the last two have the smallest growth compared to the other drug regimens.
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
final_tumor_volume = de_duped_mice_df["Tumor Volume (mm3)"]
# creating a list and dataframe to pull specific drug regimen data from
chosen_drug_regimens = ["Capomulin", "Ramicane", "Propriva", "Ceftamin"]
final_tumor_volume_regimens = de_duped_mice_df[["Tumor Volume (mm3)", "Drug Regimen"]]
# creating dataframes for tumor volumes based on four most promising regimens
capo_final_tumor_volume = final_tumor_volume_regimens.loc[(final_tumor_volume_regimens["Drug Regimen"] == "Capomulin")]
rami_final_tumor_volume = final_tumor_volume_regimens.loc[(final_tumor_volume_regimens["Drug Regimen"] == "Ramicane")]
pro_final_tumor_volume = final_tumor_volume_regimens.loc[(final_tumor_volume_regimens["Drug Regimen"] == "Propriva")]
ceft_final_tumor_volume = final_tumor_volume_regimens.loc[(final_tumor_volume_regimens["Drug Regimen"] == "Ceftamin")]
# Calculate the IQR and quantitatively determine if there are any potential outliers. - Capomulin
capo_quartiles = capo_final_tumor_volume["Tumor Volume (mm3)"].quantile(q=[0.25, 0.5, 0.75])
capo_lowerq = capo_quartiles[0.25]
capo_upperq = capo_quartiles[0.75]
capo_iqr = capo_upperq - capo_lowerq
capo_lower_bound = capo_lowerq - (1.5 * capo_iqr)
capo_upper_bound = capo_upperq + (1.5 * capo_iqr)
# Ramicane:
rami_quartiles = rami_final_tumor_volume["Tumor Volume (mm3)"].quantile(q=[0.25, 0.5, 0.75])
rami_lowerq = rami_quartiles[0.25]
rami_upperq = rami_quartiles[0.75]
rami_iqr = rami_upperq - rami_lowerq
rami_lower_bound = rami_lowerq - (1.5 * rami_iqr)
rami_upper_bound = rami_upperq + (1.5 * rami_iqr)
# Propriva:
pro_quartiles = pro_final_tumor_volume["Tumor Volume (mm3)"].quantile(q=[0.25, 0.5, 0.75])
pro_lowerq = pro_quartiles[0.25]
pro_upperq = pro_quartiles[0.75]
pro_iqr = pro_upperq - pro_lowerq
pro_lower_bound = pro_lowerq - (1.5 * pro_iqr)
pro_upper_bound = pro_upperq + (1.5 * pro_iqr)
# Ceftamin:
ceft_quartiles = ceft_final_tumor_volume["Tumor Volume (mm3)"].quantile(q=[0.25, 0.5, 0.75])
ceft_lowerq = ceft_quartiles[0.25]
ceft_upperq = ceft_quartiles[0.75]
ceft_iqr = ceft_upperq - ceft_lowerq
ceft_lower_bound = ceft_lowerq - (1.5 * ceft_iqr)
ceft_upper_bound = ceft_upperq + (1.5 * ceft_iqr)
print(f"Using iqr, we have deteremined that any Capomulin value below {capo_lower_bound} or above {capo_upper_bound} could potentially be an outlier")
print(f"Using iqr, we have deteremined that any Ramicane value below {rami_lower_bound} or above {rami_upper_bound} could potentially be an outlier")
print(f"Using iqr, we have deteremined that any Propriva value below {pro_lower_bound} or above {pro_upper_bound} could potentially be an outlier")
print(f"Using iqr, we have deteremined that any Propriva value below {ceft_lower_bound} or above {ceft_upper_bound} could potentially be an outlier")
###Output
Using iqr, we have determined that any Capomulin value below 20.70456164999999 or above 51.83201549 could potentially be an outlier
Using iqr, we have determined that any Ramicane value below 17.912664470000003 or above 54.30681135 could potentially be an outlier
Using iqr, we have determined that any Propriva value below 28.95110303500001 or above 82.742745555 could potentially be an outlier
Using iqr, we have determined that any Ceftamin value below 25.355449580000002 or above 87.66645829999999 could potentially be an outlier
###Markdown
Plotting box plots for each of the drug regimens side by side
###Code
data_to_plot = [capo_final_tumor_volume["Tumor Volume (mm3)"], rami_final_tumor_volume["Tumor Volume (mm3)"], pro_final_tumor_volume["Tumor Volume (mm3)"], ceft_final_tumor_volume["Tumor Volume (mm3)"]]
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.figure(figsize=(11, 7))
plt.boxplot(data_to_plot, labels=chosen_drug_regimens)
plt.title("Final Tumor Volume (mm3) by Drug Regimen")
plt.ylabel("Final Tumor Volume (mm3)")
plt.savefig("../Images/FinalTumorVolumeByDrug.png")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capo_tumor_volume_all_df = mice_sorted_df.loc[(mice_sorted_df["Drug Regimen"] == "Capomulin")]
capo_tumor_time_df = capo_tumor_volume_all_df[["Mouse ID", "Timepoint", "Tumor Volume (mm3)", "Weight (g)"]]
#selecting individual mouse for line and scatter plots
b128_df = capo_tumor_time_df.loc[(capo_tumor_time_df["Mouse ID"] == "b128")]
b128_df
timepoint_x_axis = b128_df["Timepoint"]
tumor_volume_y_axis = b128_df["Tumor Volume (mm3)"]
plt.plot(timepoint_x_axis, tumor_volume_y_axis, marker="+",color="red", linewidth=1.5)
plt.title("B128 Tumor Volume (mm3) by Timepoint")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.savefig("../Images/B128TumorVolumeByTime.png")
plt.show()
average_tumor_volume_by_weight_df = capo_tumor_time_df.groupby("Weight (g)").mean()
average_tumor_volume_by_weight_df
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
weight_x_axis = average_tumor_volume_by_weight_df.index
weight_y_axis = average_tumor_volume_by_weight_df["Tumor Volume (mm3)"]
plt.scatter(weight_x_axis, weight_y_axis, marker="o",color="blue")
plt.title("Average Tumor Volume (mm3) by Weight (g)")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.savefig("../Images/WeightByTumorVolume.png")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
corr_coeff = stats.pearsonr(weight_x_axis, weight_y_axis)
print(f"The correlation between the average tumor size and weight for a mouse on the Capomulin regimen is {round(corr_coeff[0],2)}.")
###Output
The correlation between the average tumor size and weight for a mouse on the Capomulin regimen is 0.95
###Markdown
Given that the r value for the relationship between average tumor size and weight for a mouse is close to 1, we can say that there is a strong positive correlation between the two.
###Code
# linear regression using scipy
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(weight_x_axis, weight_y_axis)
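# rvalue returned by linregress is Pearson's r; rvalue**2 is the coefficient of determination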
# finding regression values
regress_values = weight_x_axis * slope + intercept
# finding equation of regression line
line_equation = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(weight_x_axis, weight_y_axis, marker="o",color="blue")
plt.plot(weight_x_axis, regress_values, "--")
plt.annotate(line_equation, (20,30) ,fontsize=12,color="red")
plt.title("Average Tumor Volume (mm3) by Weight (g)")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.savefig("../Images/RegressionWeightByTumorVolume.png")
plt.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
Mean_tumor_vol = clinical_data_complete.groupby(["Drug","Timepoint"]).mean()["Tumor Volume (mm3)"]
Mean_tumor_vol
# Convert to DataFrame
Mean_tumor_vol = pd.DataFrame(Mean_tumor_vol)
Mean_tumor_vol = Mean_tumor_vol.reset_index()
# Preview DataFrame
Mean_tumor_vol.head()
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
Tumor_vol_SE = clinical_data_complete.groupby(["Drug", "Timepoint"]).sem()["Tumor Volume (mm3)"]
# Convert to DataFrame
Tumor_vol_SE = pd.DataFrame(Tumor_vol_SE)
# Preview DataFrame
Tumor_vol_SE.head().reset_index()
# Minor Data Munging to Re-Format the Data Frames
Mean_tumor_vol = Mean_tumor_vol.reset_index()
Mean_tumor_vol_pivot_mean = Mean_tumor_vol.pivot(index="Timepoint", columns="Drug")["Tumor Volume (mm3)"]
# Preview that Reformatting worked
Mean_tumor_vol_pivot_mean.head()
# Minor Data Munging to Re-Format the Data Frames
Tumor_vol_SE = Tumor_vol_SE.reset_index()
Tumor_vol_pivot_SE = Tumor_vol_SE.pivot(index="Timepoint", columns="Drug")["Tumor Volume (mm3)"]
# Preview that Reformatting worked
Tumor_vol_pivot_SE.head()
# Generate the Plot (with Error Bars)
plt.errorbar(Mean_tumor_vol_pivot_mean.index, Mean_tumor_vol_pivot_mean["Capomulin"], yerr=Tumor_vol_pivot_SE["Capomulin"], color="r", marker="o", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Mean_tumor_vol_pivot_mean.index, Mean_tumor_vol_pivot_mean["Infubinol"], yerr=Tumor_vol_pivot_SE["Infubinol"], color="b", marker="^", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Mean_tumor_vol_pivot_mean.index, Mean_tumor_vol_pivot_mean["Ketapril"], yerr=Tumor_vol_pivot_SE["Ketapril"], color="g", marker="s", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Mean_tumor_vol_pivot_mean.index, Mean_tumor_vol_pivot_mean["Placebo"], yerr=Tumor_vol_pivot_SE["Placebo"], color="k", marker="d", markersize=5, linestyle="dashed", linewidth=0.50)
plt.title("Tumor Response to Treatment")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Time (Days)")
plt.grid(True)
plt.legend(loc="best", fontsize="small", fancybox=True)
# Save the Figure
plt.savefig("C:/Users/17703/Shreya/Pymaceuticals/Fig1.png")
# Show the Figure
plt.show()
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
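# One possible sketch for the steps above (not the original submission; assumes clinical_data_complete
# has "Mouse ID", "Drug" and "Timepoint" columns as used earlier in this notebook).
mouse_count = clinical_data_complete.groupby(["Drug", "Timepoint"]).count()["Mouse ID"]
mouse_count_pivot = pd.DataFrame(mouse_count).reset_index().pivot(index="Timepoint", columns="Drug")["Mouse ID"]
# Express survival as a percentage of the mice present at Timepoint 0
survival_pct = 100 * mouse_count_pivot / mouse_count_pivot.iloc[0]
for drug, style in [("Capomulin", "ro--"), ("Infubinol", "b^--"), ("Ketapril", "gs--"), ("Placebo", "kd--")]:
    plt.plot(survival_pct.index, survival_pct[drug], style, markersize=5, linewidth=0.5)
plt.title("Survival During Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Survival Rate (%)")
plt.grid(True)
plt.legend(["Capomulin", "Infubinol", "Ketapril", "Placebo"], loc="best", fontsize="small", fancybox=True)
plt.show()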
###Output
_____no_output_____
###Markdown
![Mouse Survival During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
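# One possible sketch for the steps above (not the original submission; assumes the
# Mean_tumor_vol_pivot_mean table built earlier in this notebook is available).
pct_change = 100 * (Mean_tumor_vol_pivot_mean.iloc[-1] - Mean_tumor_vol_pivot_mean.iloc[0]) / Mean_tumor_vol_pivot_mean.iloc[0]
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
changes = pct_change[drugs]
colors = ["green" if value < 0 else "red" for value in changes]  # passing (shrinking) vs failing (growing)
fig, ax = plt.subplots()
bars = ax.bar(drugs, changes, color=colors)
ax.axhline(0, color="black", linewidth=0.8)
ax.set_ylabel("% Tumor Volume Change")
ax.set_title("Tumor Change Over Full Treatment")
# Label each bar with its rounded percent change
for bar, value in zip(bars, changes):
    ax.annotate(f"{value:.0f}%", (bar.get_x() + bar.get_width() / 2, value / 2), ha="center", color="white")
plt.show()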
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
mouse_df = pd.merge(study_results, mouse_metadata, on="Mouse ID")
# Display the data table for preview
mouse_df.head(5)
# Checking the number of mice.
mice = mouse_df["Mouse ID"].value_counts()
number_of_mice = len(mice)
number_of_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = mouse_df.loc[mouse_df.duplicated(subset=['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = mouse_df[mouse_df['Mouse ID'].isin(duplicate_mice)==False]
# Checking the number of mice in the clean DataFrame.
clean_mice=clean_df["Mouse ID"].value_counts()
clean_number_of_mice=len(clean_mice)
clean_number_of_mice
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
mean = clean_df['Tumor Volume (mm3)'].groupby(clean_df['Drug Regimen']).mean()
median = clean_df['Tumor Volume (mm3)'].groupby(clean_df['Drug Regimen']).median()
var = clean_df['Tumor Volume (mm3)'].groupby(clean_df['Drug Regimen']).var()
std = clean_df['Tumor Volume (mm3)'].groupby(clean_df['Drug Regimen']).std()
sem = clean_df['Tumor Volume (mm3)'].groupby(clean_df['Drug Regimen']).sem()
summary_stat = pd.DataFrame({"Mean Tumor Volume":mean,
"Median Tumor Volume":median,
"Tumor Volume Variance":var,
"Tumor Volume Std. Dev.":std,
"Tumor Volume SEM.":sem})
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
summary_stat
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regime
# Using the aggregation method, produce the same summary statistics in a single line
summary_agg = clean_df.groupby(['Drug Regimen'])[['Tumor Volume (mm3)']].agg(['mean', 'median', 'var', 'std', 'sem'])
summary_agg
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
mice_count = clean_df["Drug Regimen"].value_counts()
plot_pandas = mice_count.plot.bar(color='b')
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.title("Number of Mice per Treatment")
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
x_axis = mice_count.index.values
y_axis = mice_count.values
plt.bar(x_axis, y_axis, color='b', alpha=0.8, align='center')
plt.title("Number of Mice Tested per Treatment")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.xticks(rotation="vertical")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_data = clean_df["Sex"].value_counts()
plt.title("Female vs. Male Mice")
gender_data.plot.pie(autopct= "%1.1f%%")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sizes = gender_data.values
labels = gender_data.index.values
plt.pie(sizes, labels=labels, autopct="%1.1f%%")
plt.title('Male vs Female Mouse Population')
plt.ylabel('Sex')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
max_timepoint_df = pd.DataFrame(clean_df.groupby('Mouse ID')['Timepoint'].max().sort_values()).reset_index().rename(columns={'Timepoint': 'max_timepoint'})
max_timepoint_df
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merged_df = pd.merge(clean_df, max_timepoint_df, on='Mouse ID')
merged_df.head()
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_data = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for treatment in treatments:
# Locate the rows which contain mice on each drug and get the tumor volumes
drug_df = merged_df.loc[merged_df['Drug Regimen'] == treatment]
# add subset
final_df = drug_df.loc[drug_df['Timepoint'] == drug_df['max_timepoint']]
# Determine outliers using upper and lower bounds
values = final_df['Tumor Volume (mm3)']
tumor_vol_data.append(values)
quartiles = values.quantile([.25, .5, .75])
lowerq = quartiles[.25]
upperq = quartiles[.75]
iqr = upperq - lowerq
print(f'IQR for {treatment}: {iqr}')
#identify outliers
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f'Lower Bound for {treatment}: {lower_bound}')
print(f'Upper Bound for {treatment}: {upper_bound}')
    # Check for outliers
outliers_count = (values.loc[(final_df['Tumor Volume (mm3)'] >= upper_bound) |
(final_df['Tumor Volume (mm3)'] <= lower_bound)]).count()
print(f'Number of {treatment} outliers: {outliers_count}')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
flierprops = dict(marker='o', markerfacecolor='r', markersize=8, markeredgecolor='black')
plt.boxplot(tumor_vol_data, flierprops=flierprops)
plt.title('Final Tumor Volume by Drug')
plt.ylabel('Final Tumor Volume (mm3)')
plt.xticks([1, 2, 3, 4], ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'])
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
merged_df.loc[merged_df['Drug Regimen'] == 'Capomulin']
mouse = merged_df.loc[merged_df['Mouse ID'] == 's185']
#Plot line chart
plt.plot(mouse['Timepoint'], mouse['Tumor Volume (mm3)'], marker='o', color = 'b')
# Add labels and title to plot
plt.xlabel("Time (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Capomulin Treatment of Mouse s185")
# Display plot
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
#capomulin
capomulin_df = clean_df.loc[clean_df['Drug Regimen'] == 'Capomulin']
capomulin_df
#Average Tumor volume
avg_vol_df = pd.DataFrame(capomulin_df.groupby('Mouse ID')['Tumor Volume (mm3)'].mean()).reset_index().rename(columns={'Tumor Volume (mm3)': 'avg_tumor_vol'})
avg_vol_df
avg_vol_df = pd.merge(capomulin_df, avg_vol_df, on='Mouse ID')
avg_vol_df
x_data = avg_vol_df['Weight (g)']
y_data = avg_vol_df['avg_tumor_vol']
plt.scatter(x_data, y_data)
# Add labels and title to plot
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title('Average Tumor Volume by Weight')
# Display plot
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
capomulin_df = clean_df.loc[clean_df['Drug Regimen'] == 'Capomulin']
#Average Tumor volume
avg_vol_df = pd.DataFrame(capomulin_df.groupby('Mouse ID')['Tumor Volume (mm3)'].mean()).reset_index().rename(columns={'Tumor Volume (mm3)': 'avg_tumor_vol'})
avg_vol_df = pd.merge(capomulin_df, avg_vol_df, on='Mouse ID')
x_data = avg_vol_df['Weight (g)']
y_data = avg_vol_df['avg_tumor_vol']
#calculate correlation coefficient
correlation_coef = st.pearsonr(x_data, y_data)
# Print the answer to above calculation
print(f'The correlation between weight and average tumor volume for Capomulin regimen is {round(correlation_coef[0],2)}.')
# Calculate linear regression
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_data, y_data)
regress_values = x_data * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
#plot scatter and linear regression
plt.scatter(x_data, y_data)
plt.plot(x_data, regress_values, 'r-')
# Add labels and title to plot
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title('Average Tumor Volume by Weight')
plt.show()
###Output
The correlation between weight and average tumor volume for Capomulin regimen is 0.83.
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Mouse Survival During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata, study_results, on='Mouse ID')
combined_df
# Checking the number of mice in the DataFrame.
combined_df.count()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
id_timepoint_df=combined_df[combined_df.duplicated(['Mouse ID', 'Timepoint'])]
id_timepoint_df
# Optional: Get all the data for the duplicate mouse ID.
combined_df[combined_df.duplicated(["Mouse ID"])]
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combined_df.drop_duplicates(subset=['Mouse ID','Timepoint'], keep="first", inplace=False)
clean_df
# Checking the number of mice in the clean DataFrame.
clean_df.count()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straighforward, creating multiple series and putting them all together at the end.
regimen_tumor_group = clean_df.groupby(["Drug Regimen"])
tumor_mean = regimen_tumor_group["Tumor Volume (mm3)"].mean()
tumor_median = regimen_tumor_group["Tumor Volume (mm3)"].median()
tumor_var = regimen_tumor_group["Tumor Volume (mm3)"].var()
tumor_std = regimen_tumor_group["Tumor Volume (mm3)"].std()
tumor_sem = regimen_tumor_group["Tumor Volume (mm3)"].sem()
tumor_summary = pd.DataFrame ({"Mean": tumor_mean,
"Median":tumor_median,
"Variance":tumor_var,
"Standard Deviation":tumor_std,
"SEM": tumor_sem})
tumor_summary = tumor_summary.sort_values(["SEM"])
tumor_summary
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
mice_and_drug = clean_df["Drug Regimen"].value_counts()
mice_and_drug
mice_and_drug.plot(kind = "bar")
plt.title("Mice per Treatment")
plt.ylabel("Number of Mice")
plt.tight_layout()
plt.show()
# Bar plot of mice per treatment using pyplot
x_axis = np.arange(len(mice_and_drug))
y_axis = mice_and_drug.values
plt.bar(x_axis, y_axis, color='blue', alpha=0.65, align="center")
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, mice_and_drug.index.values, rotation="vertical")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.title("Mice per Treatment")
plt.show()
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# counts = clean_df.groupby(["Sex", "Mouse ID"]).count()
counts = clean_df.groupby("Sex")["Mouse ID"].nunique()
counts
counts.plot(kind = "pie", autopct = "%1.1f%%", startangle =180)
plt.title("Sex")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
counts = clean_df.groupby("Sex")["Mouse ID"].nunique()
# counts
labels = ["Male","Female"]
sizes = [125, 124]
colors = ["orange", "blue"]
plt.title("Sex")
plt.pie(sizes, labels= labels, colors = colors, autopct = "%1.1f%%")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Capomulin Merge
cap_df = clean_df.loc[clean_df["Drug Regimen"]=="Capomulin"]
cap_df = cap_df.groupby('Mouse ID')['Timepoint'].max()
cap_time = pd.DataFrame(cap_df)
cap_merge = pd.merge(cap_time, clean_df, on=("Mouse ID","Timepoint"),how="left")
# cap_merge
cap_tumor = cap_merge["Tumor Volume (mm3)"]
cap_tumor
#Ramicane Merge
ram_df = clean_df.loc[clean_df["Drug Regimen"]=="Ramicane"]
ram_df = ram_df.groupby('Mouse ID')['Timepoint'].max()
ram_time = pd.DataFrame(ram_df)
ram_merge = pd.merge(ram_time, clean_df, on=("Mouse ID","Timepoint"),how="left")
# ram_merge
ram_tumor = ram_merge["Tumor Volume (mm3)"]
ram_tumor
#Infubinol Merge
inf_df = clean_df.loc[clean_df["Drug Regimen"]=="Infubinol"]
inf_df = inf_df.groupby('Mouse ID')['Timepoint'].max()
inf_time = pd.DataFrame(inf_df)
inf_merge = pd.merge(inf_time, clean_df, on=("Mouse ID","Timepoint"),how="left")
inf_merge
inf_tumor = inf_merge["Tumor Volume (mm3)"]
inf_tumor
# Ceftamin Merge
cef_df = clean_df.loc[clean_df["Drug Regimen"]=="Ceftamin"]
cef_df = cef_df.groupby('Mouse ID')['Timepoint'].max()
cef_time = pd.DataFrame(cef_df)
cef_merge = pd.merge(cef_time, clean_df, on=("Mouse ID","Timepoint"),how="left")
cef_merge
cef_tumor = cef_merge["Tumor Volume (mm3)"]
cef_tumor
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Capomulin Quartiles
quartiles = cap_tumor.quantile([.25,.5,.75])
cap_lowerq = quartiles[.25]
cap_upperq = quartiles[.75]
cap_iqr = cap_upperq - cap_lowerq
cap_lower_bound = cap_lowerq - (1.5*cap_iqr)
cap_upper_bound = cap_upperq + (1.5*cap_iqr)
print(f"The Capomulin lower quartile of Tumor Volume is: {cap_lowerq}")
print(f"The Capomulin upper quartile of Tumor Volume is: {cap_upperq}")
print(f"The Capomulin interquartile range of Tumor Volume is: {cap_iqr}")
print(f"The median of Capomulin is: {quartiles[0.5]} ")
print(f"Capomulin values below {cap_lower_bound} could be outliers.")
print(f"Caponulin values above {cap_upper_bound} could be outliers.")
print(f"-------------------------------------------------------------")
#Ramicane Quartiles
ram_quartiles = ram_tumor.quantile([.25,.5,.75])
ram_lowerq = ram_quartiles[.25]
ram_upperq = ram_quartiles[.75]
ram_iqr = ram_upperq - ram_lowerq
ram_lower_bound = ram_lowerq - (1.5*ram_iqr)
ram_upper_bound = ram_upperq + (1.5*ram_iqr)
print(f"The Ramicane lower quartile of Tumor Volume is: {ram_lowerq}")
print(f"The Ramicane upper quartile of Tumor Volume is: {ram_upperq}")
print(f"The Ramicane interquartile range of Tumor Volume is: {ram_iqr}")
print(f"The median of Ramicane is: {ram_quartiles[0.5]} ")
print(f"Ramicane values below {ram_lower_bound} could be outliers.")
print(f"Ramicane values above {ram_upper_bound} could be outliers.")
print(f"--------------------------------------------------------------")
# Infubinol Quartiles
inf_quartiles = inf_tumor.quantile([.25,.5,.75])
inf_lowerq = inf_quartiles[.25]
inf_upperq = inf_quartiles[.75]
inf_iqr = inf_upperq - inf_lowerq
inf_lower_bound = inf_lowerq - (1.5*inf_iqr)
inf_upper_bound = inf_upperq + (1.5*inf_iqr)
print(f"The Infubinol lower quartile of Tumor Volume is: {inf_lowerq}")
print(f"The Infubinol upper quartile of Tumor Volume is: {inf_upperq}")
print(f"The Infubinol interquartile range of Tumor Volume is: {inf_iqr}")
print(f"The median of Infubinol is: {inf_quartiles[0.5]} ")
print(f"Infubinol values below {inf_lower_bound} could be outliers.")
print(f"Infubinol values above {inf_upper_bound} could be outliers.")
print(f"--------------------------------------------------------------")
# Ceftamin Quartiles
# cef_df = clean_df.loc[clean_df["Drug Regimen"]=="Ceftamin"]
# cef_tumor = cef_df["Tumor Volume (mm3)"]
cef_quartiles = cef_tumor.quantile([.25,.5,.75])
cef_lowerq = cef_quartiles[.25]
cef_upperq = cef_quartiles[.75]
cef_iqr = cef_upperq - cef_lowerq
cef_lower_bound = cef_lowerq - (1.5*cef_iqr)
cef_upper_bound = cef_upperq + (1.5*cef_iqr)
print(f"The Ceftamin lower quartile of Tumor Volume is: {cef_lowerq}")
print(f"The Ceftamin upper quartile of Tumor Volume is: {cef_upperq}")
print(f"The Ceftamin interquartile range of Tumor Volume is: {cef_iqr}")
print(f"The Ceftamin the median of temperatures is: {cef_quartiles[0.5]} ")
print(f"Ceftamin values below {cef_lower_bound} could be outliers.")
print(f"Ceftamin values above {cef_upper_bound} could be outliers.")
# tumor_summary
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
box_plot = [cap_tumor, ram_tumor, inf_tumor, cef_tumor]
fig1, ax1 = plt.subplots()
ax1.set_title('Tumors')
ax1.set_ylabel('Final Tumor Volume (mm3)')
ax1.set_xlabel('Drug Regimen')
ax1.boxplot(box_plot, labels=["Capomulin","Ramicane", "Infubinol", "Ceftamin"])
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# cap_tumor_id = cap_df[["Timepoint", "Tumor Volume (mm3)", "Mouse ID"]]
# cap_tumor_id
drug_df = clean_df.loc[clean_df["Drug Regimen"]=="Capomulin"]
one_mouse = drug_df.loc[drug_df["Mouse ID"] == "l509"]
one_mouse
x_axis = one_mouse["Timepoint"]
y_axis = one_mouse["Tumor Volume (mm3)"]
plt.plot(x_axis, y_axis,linewidth=3)
plt.xlabel("Timepoint (days)")
plt.ylabel("Tumoe Volume (mm3)")
plt.title("Capomulin of l509")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
tum_avg = drug_df.groupby(["Mouse ID"]).mean()
tum_avg
x_values = tum_avg["Weight (g)"]
y_values = tum_avg["Tumor Volume (mm3)"]
plt.scatter(x_values, y_values)
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title("Weight vs Avg Tumor for Capomulin Regimen")
plt.show()
# weight_tumor = drug_df[["Weight (g)", "Tumor Volume (mm3)"]]
# weight_tumor
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
from scipy.stats import linregress
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
weight = tum_avg["Weight (g)"]
avg_tumor = tum_avg["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(weight, avg_tumor)
regress_values = weight* slope + intercept
correlation = st.pearsonr(weight, avg_tumor)
plt.scatter(weight, avg_tumor)
plt.plot(weight,regress_values,"r-")
# plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
print(f"The correlation between mouse weight and the average tumor volume is {round(correlation[0],2)}")
plt.show()
###Output
The correlation between mouse weight and the average tumor volume is 0.84
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation,
# and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
#group by Drug Regimen and get aggregate values on Tumor Volume mm3 column
tumor_volume_mean = merged_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean()
tumor_volume_median = merged_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].median()
tumor_volume_variance = merged_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].var()
tumor_volume_std = merged_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].std()
tumor_volume_sem = merged_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem()
#Add each series to a summary data frame
drug_regimen_summary_table_df = pd.DataFrame(tumor_volume_mean)
drug_regimen_summary_table_df = drug_regimen_summary_table_df.rename(columns={"Tumor Volume (mm3)" : "Tumor Volume (mm3) Mean"})
drug_regimen_summary_table_df["Tumor Volume (mmr) Median"] = tumor_volume_median
drug_regimen_summary_table_df["Tumor Volume (mmr) Variance"] = tumor_volume_variance
drug_regimen_summary_table_df["Tumor Volume (mmr) STD"] = tumor_volume_std
drug_regimen_summary_table_df["Tumor Volume (mmr) SEM"] = tumor_volume_sem
drug_regimen_summary_table_df
# Generate a summary statistics table of mean, median, variance, standard deviation,
# and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summary_table_by_Regimen = merged_data_df.groupby("Drug Regimen")
summary_table_by_Regimen = summary_table_by_Regimen.agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"]
summary_table_by_Regimen
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
total_measurements_bar_plot_df = merged_data_df.groupby("Drug Regimen")["Mouse ID"].nunique()
bar_plot = total_measurements_bar_plot_df.plot.bar(title="Total Measurements by Drug Regimen")
bar_plot.set_xlabel("Drug Regimen")
bar_plot.set_ylabel("Total Measurements")
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
total_measurements_bar_plot_df.plot.bar()
plt.title("Total Measurements by Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Total Measurements")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
mouse_gender_data = merged_data_df["Sex"].value_counts()
explode = (.1,0)
pie_chart = mouse_gender_data.plot.pie(title="Distribution of Female vs. Male Mice", explode = explode, autopct="%1.1f%%", startangle=140, shadow=True)
pie_chart.set_ylabel("")
pie_chart.axis("equal")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(mouse_gender_data, labels = mouse_gender_data.index.values,autopct="%1.1f%%", explode = explode, shadow=True, startangle=140)
plt.title("Distribution of Female vs. Male Mice")
plt.axis("equal")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
greatest_timepoint_df = pd.DataFrame(merged_data_df.groupby("Mouse ID")["Timepoint"].max())
greatest_timepoint_df
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
#inner join on Mouse ID and Timepoint gives us only the Max Timepoint value we're interested in
max_timepoint_dataset_df = pd.merge(greatest_timepoint_df, merged_data_df, on=("Mouse ID", "Timepoint"))
max_timepoint_dataset_df.head(15)
# Put treatments into a list for for loop (and later for plot labels)
treatments_list = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumor_volume_data = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for treatment in treatments_list:
# Locate the rows which contain mice on each drug and get the tumor volumes
treatment_subset_df = max_timepoint_dataset_df.loc[max_timepoint_dataset_df['Drug Regimen'] == treatment]
tumor_volume = treatment_subset_df["Tumor Volume (mm3)"]
# add subset
tumor_volume_data.append(tumor_volume)
# Determine outliers using upper and lower bounds
quartiles = tumor_volume.quantile([.25, .5, .75])
lowerq = quartiles[.25]
upperq = quartiles[.75]
iqr = upperq-lowerq
#lower and Upper bound calculations
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
#outliers
#Count the number of times our values are below the lower bound, or above the upper bound
outliers_count = (tumor_volume.loc[
(treatment_subset_df["Tumor Volume (mm3)"] <= lower_bound) |
(treatment_subset_df["Tumor Volume (mm3)"] >= upper_bound)]).count()
print(f"------------------")
print(f"Drug: {treatment}")
print(f"------------------")
print(f" IQR: {iqr}")
print(f" Upper Bound: {upper_bound}")
print(f" Lower Bound: {lower_bound}")
print(f" Number of Outliers: {outliers_count}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumor_volume_data)
plt.xticks([1, 2, 3, 4], treatments_list)
plt.title("Final Tumor Volume by Treatment")
plt.xlabel("Treatment")
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
mouse_to_plot = merged_data_df.loc[merged_data_df["Drug Regimen"] == "Capomulin"]["Mouse ID"].values[0]
mouse_to_plot_df = merged_data_df.loc[merged_data_df["Mouse ID"] == mouse_to_plot]
mouse_to_plot_df
plt.plot(mouse_to_plot_df["Timepoint"], mouse_to_plot_df["Tumor Volume (mm3)"])
plt.title(f"Tumor Volume vs. Timepoint with Campolumin for test mouse {mouse_to_plot}")
plt.xlabel("Time")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_treatment_df = merged_data_df.loc[merged_data_df["Drug Regimen"] == "Capomulin"]
capomulin_treatment_df
average_tumor_volume_df = pd.DataFrame(capomulin_treatment_df.groupby("Mouse ID")["Tumor Volume (mm3)"].mean())
# average_tumor_volume_df
scatter_plot_df = pd.merge(capomulin_treatment_df, average_tumor_volume_df, on="Mouse ID")
scatter_plot_df = scatter_plot_df.rename(columns={"Tumor Volume (mm3)_x" : "Tumor Volume (mm3)", "Tumor Volume (mm3)_y":"Average Tumor Volume"})
# scatter_plot_df
x_axis = scatter_plot_df["Weight (g)"]
y_axis = scatter_plot_df["Average Tumor Volume"]
plt.scatter(x_axis, y_axis)
plt.title("Average Tumor Volume vs. Weight (g)")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
#Correlation Coefficient
correlation_coeff = st.pearsonr(x_axis, y_axis)
print(f"The correlation coefficient: {round(correlation_coeff[0],2)}.")
#Linear Regression
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_axis, y_axis)
regression_value = slope * x_axis + intercept
line_equation = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# line_equation
#Replot scatter plot with linear regression information
plt.scatter(x_axis, y_axis)
plt.plot(x_axis, regression_value, "r-")
plt.annotate(line_equation, (22, 35), color="red")
plt.title("Average Tumor Volume vs. Weight (g)")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
###Output
The correlation coefficient: 0.83.
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
meta = pd.read_csv(mouse_metadata_path)
results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
data = meta.merge(results, on='Mouse ID')
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Mouse Survival During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import seaborn as sns
from scipy.stats import linregress
from sklearn import datasets
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
Merged_Data = pd.merge(mouse_metadata, study_results, on = "Mouse ID", how = "inner")
Merged_Data
# Checking the number of mice in the DataFrame.
Merged_Data['Mouse ID'].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
Merged_Data_dup= Merged_Data.duplicated(['Mouse ID','Timepoint'])
Merged_Data_dup
Duplicate_mouse = Merged_Data.loc[Merged_Data_dup]['Mouse ID'].unique()
Duplicate_mouse
# Optional: Get all the data for the duplicate mouse ID.
Mask = Merged_Data['Mouse ID']!='g989'
Mask
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
Cleaned_data = Merged_Data.loc[Mask]
Cleaned_data
Cleaned_data['Mouse ID']
# Checking the number of mice in the clean DataFrame.
Cleaned_data['Mouse ID'].nunique()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
Cleaned_data_stat= Cleaned_data.groupby('Drug Regimen')['Tumor Volume (mm3)']
Cleaned_data_stat
Cleaned_data_stat.mean()
Cleaned_data_stat.median()
Cleaned_data_stat.var()
Cleaned_data_stat.std()
Cleaned_data_stat.sem()
SummaryTable_CleanedData =pd.DataFrame({"Mean ": Cleaned_data_stat.mean(),
"Median": Cleaned_data_stat.median(),
"Variance": Cleaned_data_stat.var(),
"Standard deviation":Cleaned_data_stat.std(),
"SEM": Cleaned_data_stat.sem()})
SummaryTable_CleanedData
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function.
SummaryTable_CleanedData2 = Cleaned_data.groupby('Drug Regimen').agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"]
SummaryTable_CleanedData2
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
A = Merged_Data.groupby(['Timepoint','Drug Regimen'])['Mouse ID'].count().reset_index()
# Zero-pad single-digit timepoints so the string index sorts in chronological order
def leftpad(timepoint):
    if len(timepoint) == 1:
        timepoint = '0' + timepoint
    return timepoint
A['Timepoint'] = A['Timepoint'].astype(str).apply(leftpad)
G = A.pivot(index='Timepoint', columns='Drug Regimen', values='Mouse ID')
G.plot.bar(figsize=(25,15), fontsize=18, width=0.8)
plt.xlabel("Timepoint",fontsize = 18)
plt.ylabel("Number of Mice",fontsize = 18)
plt.title("Number of Mice per Treatment",fontsize = 18)
plt.savefig("../images/treatmentBar.png", bbox_inches = "tight")
plt.show()
# Generate a bar plot showing the number of total observations across all timepoints for each treatment.
G
# Generate a bar plot showing the number of total observations across all timepoints for each treatment.
Sorted_G = G.sum().sort_values(ascending=False)
G.sum().sort_values(ascending=False)
x_axis = np.arange(len(Sorted_G))
# #for i in col timepoint:
tick_locations = [value for value in x_axis]
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0,max(Sorted_G))
plt.xticks(tick_locations,['Capomulin', 'Ceftamin', 'Infubinol', 'Ketapril', 'Naftisol', 'Placebo', 'Propriva', 'Ramicane', 'Stelasyn', 'Zoniferol'], rotation='vertical')
plt.title("Number of Mice per Treatment",fontsize = 20)
plt.xlabel("Drug Regimen",fontsize = 14)
plt.ylabel("Number of Mice",fontsize = 14)
plt.bar(x=x_axis,height=Sorted_G)
plt.savefig("../images/totalobservationVSAllBar.png", bbox_inches = "tight")
plt.show()
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
sizes = mouse_metadata.groupby(["Sex"]).count()['Mouse ID']
sizes
plt.pie(
sizes,
labels=["Male", "Female"],
explode=[0.1, 0],
colors=["Grey","Purple"],
autopct="%1.1f%%",
shadow=True,
startangle=120
)
plt.savefig("../images/PieFemaleVsMale.png", bbox_inches = "tight")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plot = sizes.plot.pie(figsize=(15,4), startangle=140, explode=(0.1, 0), shadow=True, autopct="%1.1f%%")
plt.savefig("../images/PieFemaleVsMalePyplot.png", bbox_inches = "tight")
plt.title('Male vs Female Mouse Population',fontsize = 15)
Merged_Data.columns
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
Capomulin_df = Merged_Data.loc[Merged_Data['Drug Regimen'] == 'Capomulin',:]
Capomulin_range = Capomulin_df.groupby('Mouse ID').max()['Timepoint']
Capomulin_volume = pd.DataFrame(Capomulin_range)
Capomulin_merge = pd.merge(Capomulin_volume, Merged_Data, on=("Mouse ID","Timepoint"),how="left")
Capomulin_merge.head()
Capomulin_quartiles = Capomulin_merge['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = Capomulin_quartiles[0.25]
upperq = Capomulin_quartiles[0.75]
iqr = upperq - lowerq
iqr
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
lower_bound
upper_bound
print(f"The lower quartile of Capomulin tumors: {lowerq}")
print(f"The upper quartile of Capomulin tumors: {upperq}")
print(f"The interquartile range of Capomulin tumors: {iqr}")
print(f"Vol below {lower_bound} looks like the outliers.")
print(f"Vol above {upper_bound} looks like the outliers.")
#Ramicane_df
Ramicane_df = Merged_Data.loc[Merged_Data['Drug Regimen'] == 'Ramicane',:]
Ramicane_range = Ramicane_df.groupby('Mouse ID').max()['Timepoint']
Ramicane_volume = pd.DataFrame(Ramicane_range)
Ramicane_merge = pd.merge(Ramicane_volume, Merged_Data, on=("Mouse ID","Timepoint"),how="left")
Ramicane_merge.head()
Ramicane_quartiles = Ramicane_merge['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = Ramicane_quartiles[0.25]
upperq = Ramicane_quartiles[0.75]
iqr = upperq - lowerq
iqr
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
lower_bound
upper_bound
print(f"The lower quartile of Ramicane tumors: {lowerq}")
print(f"The upper quartile of Ramicane tumors: {upperq}")
print(f"The interquartile range of Ramicane tumors: {iqr}")
print(f"Vol below {lower_bound} looks like the outliers.")
print(f"Vol above {upper_bound} looks like the outliers.")
# For Infubinol_df
Infubinol_df = Merged_Data.loc[Merged_Data['Drug Regimen'] == 'Infubinol',:]
Infubinol_range = Infubinol_df.groupby('Mouse ID').max()['Timepoint']
Infubinol_volume = pd.DataFrame(Infubinol_range)
Infubinol_merge = pd.merge(Infubinol_volume, Merged_Data, on=("Mouse ID","Timepoint"),how="left")
Infubinol_merge.head()
Infubinol_quartiles = Infubinol_merge['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = Infubinol_quartiles[0.25]
upperq = Infubinol_quartiles[0.75]
iqr = upperq - lowerq
iqr
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
lower_bound
upper_bound
print(f"The lower quartile of Infubinol tumors: {lowerq}")
print(f"The upper quartile of Infubinol tumors: {upperq}")
print(f"The interquartile range of Infubinol tumors: {iqr}")
print(f"Vol below {lower_bound} looks like the outliers.")
print(f"Vol above {upper_bound} looks like the outliers.")
# Ceftamin_df
Ceftamin_df = Merged_Data.loc[Merged_Data['Drug Regimen'] == 'Ceftamin',:]
Ceftamin_range = Ceftamin_df.groupby('Mouse ID').max()['Timepoint']
Ceftamin_volume = pd.DataFrame(Ceftamin_range)
Ceftamin_merge = pd.merge(Ceftamin_volume, Merged_Data, on=("Mouse ID","Timepoint"),how="left")
Ceftamin_merge.head()
Ceftamin_quartiles = Ceftamin_merge['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = Ceftamin_quartiles[0.25]
upperq = Ceftamin_quartiles[0.75]
iqr = upperq - lowerq
iqr
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
lower_bound
upper_bound
print(f"The lower quartile of Ceftamin tumors: {lowerq}")
print(f"The upper quartile of Ceftamin tumors: {upperq}")
print(f"The interquartile range of Ceftamin tumors: {iqr}")
print(f"Vol below {lower_bound} looks like the outliers.")
print(f"Vol above {upper_bound} looks like the outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
data_to_plot = [Capomulin_merge['Tumor Volume (mm3)'], Ramicane_merge['Tumor Volume (mm3)'], Infubinol_merge['Tumor Volume (mm3)'], Ceftamin_merge['Tumor Volume (mm3)']]
Regimen= ['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
fig1, ax1 = plt.subplots(figsize=(15, 10))
ax1.set_title('Tumor Volume for each mouse',fontsize =20)
ax1.set_ylabel('Final Tumor Volume (mm3)',fontsize = 10)
ax1.set_xlabel('Drug Regimen',fontsize = 10)
ax1.boxplot(data_to_plot, labels=Regimen, widths = 0.3, patch_artist=True,vert=True)
plt.ylim(10, 70)
plt.savefig("../images/BoxPlot.png", bbox_inches = "tight")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
#g316
Treated_Capomulin_g316 = Capomulin_df.loc[Capomulin_df["Mouse ID"] == "g316",:]
Treated_Capomulin_g316
x_axis = Treated_Capomulin_g316["Timepoint"]
size_ofTumor = Treated_Capomulin_g316["Tumor Volume (mm3)"]
fig1, ax1 = plt.subplots(figsize=(15, 10))
plt.title('Capomulin treatment for mouse g316', fontsize=20)
plt.plot(x_axis, size_ofTumor, linewidth=2, markersize=15, marker="o", color="Purple")
plt.xlabel('Timepoint (Days)',fontsize =14)
plt.ylabel('Tumor Volume (mm3)',fontsize =14)
plt.savefig("../images/LinePlot.png", bbox_inches = "tight")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
x_axisB = Capomulin_df.groupby('Mouse ID')['Weight (g)'].mean()
x_axisB
Avg_TUmo = Capomulin_df.groupby('Mouse ID')['Tumor Volume (mm3)'].mean()
Avg_TUmo
plt.scatter(x_axisB, Avg_TUmo, s=120, c= 'Purple')
plt.title('Capomulin Average, Weight (g) Vs Tumor Volume (mm3) ',fontsize =14)
plt.xlabel('Weight(g)',fontsize =14)
plt.ylabel('Tumor Volume (mm3)',fontsize =14)
plt.savefig("../images/ScatterPlot.png", bbox_inches = "tight")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
corr=round(st.pearsonr(x_axisB ,Avg_TUmo)[0],3)
corr
x_values = x_axisB
y_values = Avg_TUmo
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
print(f"slope:{slope}")
print(f"intercept:{intercept}")
print(f"rvalue (Correlation coefficient):{rvalue}")
print(f"Correlation Coefficient :{corr}")
print(f"stderr:{stderr}")
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
print(line_eq)
fig1, ax1 = plt.subplots(figsize=(15, 10))
plt.scatter(x_values,y_values,s=155, color="Purple")
plt.plot(x_values,regress_values,"r-")
plt.title('Regression graph, Mouse weight (g) Vs Average Tumor volume(mm3)',fontsize =20)
plt.xlabel('Weight(g)',fontsize =14)
plt.ylabel('Average Tumor Volume (mm3)',fontsize =14)
ax1.annotate(line_eq, xy=(20, 40), xycoords='data',xytext=(0.8, 0.95), textcoords='axes fraction',horizontalalignment='right', verticalalignment='top',fontsize=30,color="blue")
print(f"The r-squared is: {rvalue**2}")
plt.savefig("../images/regressionGraph.png", bbox_inches = "tight")
plt.show()
###Output
The r-squared is: 0.7088568047708717
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merged_df = pd.merge(mouse_metadata, study_results, on="Mouse ID")
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
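# One possible sketch for the steps listed above (not the original submission; works on the
# merged_df built in this cell).
merged_df.head()
print(f"Unique mice before cleaning: {merged_df['Mouse ID'].nunique()}")
# Mouse IDs that have more than one row for the same Timepoint
duplicate_ids = merged_df.loc[merged_df.duplicated(subset=["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
print(f"Duplicate mouse IDs: {duplicate_ids}")
# Optional: inspect every row belonging to the duplicated mice
duplicate_rows = merged_df[merged_df["Mouse ID"].isin(duplicate_ids)]
# Drop every row belonging to the duplicated mice to obtain a clean frame
clean_df = merged_df[~merged_df["Mouse ID"].isin(duplicate_ids)]
print(f"Unique mice after cleaning: {clean_df['Mouse ID'].nunique()}")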
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
groupbydrug_timepoint = df.groupby(["Drug","Timepoint"])
# Create a variable to store the mean of the Tumor volume
F = groupbydrug_timepoint["Tumor Volume (mm3)"].mean()
# Convert to DataFrame
drugtimetumor=pd.DataFrame({"Mean of the tumor volume":F})
# Preview DataFrame
print("Impact of drug on tumor volume over time")
drugtimetumor.reset_index().head(100)
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
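# One possible sketch for the remaining steps above (not the original submission; like the cell above,
# it assumes `df` is the merged study DataFrame with "Drug", "Timepoint" and "Tumor Volume (mm3)" columns).
tumor_sem = groupbydrug_timepoint["Tumor Volume (mm3)"].sem()
tumor_sem_df = pd.DataFrame({"SEM of the tumor volume": tumor_sem}).reset_index()
# Pivot both tables so each drug is a column indexed by Timepoint
mean_pivot = drugtimetumor.reset_index().pivot(index="Timepoint", columns="Drug", values="Mean of the tumor volume")
sem_pivot = tumor_sem_df.pivot(index="Timepoint", columns="Drug", values="SEM of the tumor volume")
for drug in ["Capomulin", "Infubinol", "Ketapril", "Placebo"]:
    plt.errorbar(mean_pivot.index, mean_pivot[drug], yerr=sem_pivot[drug], marker="o", markersize=4, linestyle="--", linewidth=0.5, label=drug)
plt.title("Tumor Response to Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Tumor Volume (mm3)")
plt.grid(True)
plt.legend(loc="best")
plt.show()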
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Mouse Survival During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
mice = pd.merge(study_results, mouse_metadata, on = "Mouse ID", how = 'left')
mice.head()
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
len(mice["Mouse ID"].unique())
#Duplicate Mice
duplicates = mice.loc[mice.duplicated(subset = ["Mouse ID", "Timepoint"]),"Mouse ID"].unique()
duplicates
# Clean DataFrame by dropping duplicates
mice = mice[mice["Mouse ID"].isin(duplicates) == False]
mice.head()
mice["Mouse ID"].nunique()
Regimen_group = mice.groupby(["Drug Regimen"])
mean = Regimen_group["Tumor Volume (mm3)"].mean()
median = Regimen_group["Tumor Volume (mm3)"].median()
variance = Regimen_group["Tumor Volume (mm3)"].var()
standard_dev = Regimen_group["Tumor Volume (mm3)"].std()
sem = Regimen_group["Tumor Volume (mm3)"].sem()
Regimens_data = pd.DataFrame({"Tumor Vol Mean" : mean, "Tumor Vol Median" : median, "Variance" : variance, "Std" : standard_dev, "SEM" : sem})
Regimens_data
Regimen_group = mice.groupby(["Drug Regimen"])
Regimen_group.head()
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
counts = mice["Drug Regimen"].value_counts()
counts.plot(kind = "bar")
plt.xlabel("Drug Reg")
plt.xticks(rotation = 90)
plt.ylabel("Number of Data Points")
plt.show()
#plt.bar(x_axis, count, color='r', alpha=0.5, align="center")
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
counts = mice["Drug Regimen"].value_counts()
plt.bar(counts.index.values, counts.values)
plt.xlabel("Drug Reg")
plt.xticks(rotation = 90)
plt.ylabel("Number of Data Points")
plt.show()
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
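# A minimal sketch (assuming the cleaned `mice` DataFrame defined above):
sex_counts = mice["Sex"].value_counts()
# pandas version
sex_counts.plot.pie(autopct="%1.1f%%", startangle=90)
plt.ylabel("")
plt.show()
# pyplot version
plt.pie(sex_counts.values, labels=sex_counts.index, autopct="%1.1f%%", startangle=90)
plt.axis("equal")
plt.show()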
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights - The mean tumor volume for Ketapril and Naftisol had the highest standard deviation, while Ramicane and Capomulin had the lowest. - The proportion of male mice was a little higher than that of female mice. - Ramicane and Capomulin showed a decrease in tumor volume after treatment, while no considerable change was observed in the Infubinol and Ceftamin groups. - There was a positive correlation between tumor volume and mouse weight. Dependencies and starter code
###Code
# Dependencies
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import random
from scipy.stats import linregress
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combined_data=pd.merge(mouse_metadata, study_results, on ="Mouse ID", how="outer")
# check if there is any null
#combined_data.isnull().sum()
combined_data
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# group the data by drug regimen
combined_data_grouped = combined_data.groupby(["Drug Regimen"])
#calculate mean, median, variance, standard deviation, and SEM
mean = combined_data_grouped["Tumor Volume (mm3)"].mean()
median = combined_data_grouped["Tumor Volume (mm3)"].median()
variance = combined_data_grouped["Tumor Volume (mm3)"].var()
std= combined_data_grouped["Tumor Volume (mm3)"].std()
standard_errors = combined_data_grouped["Tumor Volume (mm3)"].sem()
# create a summary DataFrame to hold the results
summary_statistics = pd.DataFrame({"Mean": mean,
"Median":median,
"Variance":variance,
"Standard Deviation": std,
"SEM": standard_errors})
summary_statistics
###Output
_____no_output_____
###Markdown
Bar plots-pandas
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
#calculate the number of data points using group by on drug regimen and count function
reg_data_points = combined_data_grouped.count()["Mouse ID"]
# Use DataFrame.plot() in order to create a bar chart of the data
reg_data_points.plot(kind="bar", facecolor="pink")
#set chart title
plt.title("Data Points for each treatment regimen ")
plt.xlabel("Drug Regimen")
plt.ylabel("Data Points")
#show chart and set layout
plt.show()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Bar plots-Matplotlib
###Code
## Generate a bar plot showing number of data points for each treatment regimen using pyplot
#define a list to hold the number and the value of data points
x_axis = np.arange(len(reg_data_points))+1
y_axis = reg_data_points
# create a list of drug regimens in the same order as the counts above
drug_regimen = reg_data_points.index
#create a bar plot using pyplot
plt.bar(x_axis, y_axis, color='pink', alpha=1, align="center")
# Tell matplotlib where we would like to place each of our x axis headers
tick_locations = [x for x in x_axis]
plt.xticks(tick_locations, drug_regimen, rotation=90)
# Give our graph axis labels and title
plt.xlabel("Drug Regimen")
plt.ylabel("Data points")
plt.title("Data points for each treatment ")
#show chart and set layout
plt.show()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Pie plots-pandas
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# define a list to hold the sexes
list_sex=["Female", "Male"]
#count the number of each sex using value.count
sex_distribution= combined_data.Sex.value_counts()
# Tell matplotlib to separate the "Female" and "Male" sections
explode = (0.1, 0)
# show a pie plot of the sex distribution using pandas Series.plot.pie
chart_pie = sex_distribution.plot.pie(explode=explode, colors=["green", "red"], figsize=(5, 5))
# Give our graph axis labels and title
plt.title("The distribution of female versus male mice")
#show chart and set layout
plt.show()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Pie plots-pyplot
###Code
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# The values of each section of the pie chart
sizes = sex_distribution
# Labels for the sections of our pie chart
labels = list_sex
# The colors of each section of the pie chart
colors = ["red", "green"]
# Tell matplotlib to separate the "Female" and "Male" sections
explode = (0.1, 0)
# Creates the pie chart based upon the values above
# Automatically finds the percentages of each part of the pie chart
plt.pie(sizes, labels=sizes.index, explode=explode, colors=colors, shadow=True, startangle=180)
# Tells matplotlib that we want a pie chart with equal axes
plt.axis("equal")
# Give our graph axis labels and title
plt.title("The distribution of female versus male mice")
#show chart and set layout
plt.show()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the
#IQR and quantitatively determine if there are any potential outliers.
# calculate the latest time point for each mouse
last_size = combined_data.groupby(["Mouse ID"]).max()
# reset the index of the previous DataFrame
last_size_reset = last_size.reset_index()
# merge this DataFrame with the combined data to get the final tumor volume
merge_last_combined = last_size_reset[["Timepoint", "Mouse ID"]].merge(combined_data, on=["Mouse ID", "Timepoint"], how="left")
# separate the data for each regimen of interest
capumulin_volume = merge_last_combined.loc[merge_last_combined["Drug Regimen"]=="Capomulin"]["Tumor Volume (mm3)"]
ramicane_volume = merge_last_combined.loc[merge_last_combined["Drug Regimen"]=="Ramicane"]["Tumor Volume (mm3)"]
infubinol_volume = merge_last_combined.loc[merge_last_combined["Drug Regimen"]=="Infubinol"]["Tumor Volume (mm3)"]
ceftamin_volume = merge_last_combined.loc[merge_last_combined["Drug Regimen"]=="Ceftamin"]["Tumor Volume (mm3)"]
# calculate the outliers for Capomulin
capumulin_qurtiles = capumulin_volume.quantile([0.25,0.5,0.75])
capumulin_lowerq = capumulin_qurtiles[0.25]
capumulin_upperq = capumulin_qurtiles[0.75]
capumulin_iqr = capumulin_upperq - capumulin_lowerq
capumulin_lower_bound = capumulin_lowerq - (1.5*capumulin_iqr)
capumulin_upper_bound = capumulin_upperq + (1.5*capumulin_iqr)
#print out the results for capomulin
print(f'The potential outliers for capomulin: {capumulin_volume.loc[(capumulin_volume<capumulin_lower_bound)|(capumulin_volume> capumulin_upper_bound)]}')
# calculate the outliers for Ramicane
ramicane_qurtiles = ramicane_volume.quantile([0.25,0.5,0.75])
ramicane_lowerq = ramicane_qurtiles[0.25]
ramicane_upperq = ramicane_qurtiles[0.75]
ramicane_iqr = ramicane_upperq - ramicane_lowerq
ramicane_lower_bound = ramicane_lowerq - (1.5*ramicane_iqr)
ramicane_upper_bound = ramicane_upperq + (1.5*ramicane_iqr)
#print out the results for Ramicane
print(f'The potential outliers for ramicane: {ramicane_volume.loc[(ramicane_volume < ramicane_lower_bound)|(ramicane_volume> ramicane_upper_bound)]}')
# calculate the outliers for Infubinol
infubinol_qurtiles = infubinol_volume.quantile([0.25,0.5,0.75])
infubinol_lowerq = infubinol_qurtiles[0.25]
infubinol_upperq = infubinol_qurtiles[0.75]
infubinol_iqr = infubinol_upperq - infubinol_lowerq
infubinol_lower_bound = infubinol_lowerq - (1.5*infubinol_iqr)
infubinol_upper_bound = infubinol_upperq + (1.5*infubinol_iqr)
#print out the results for Infubinol
print(f'The potential outliers for infubinol: {infubinol_volume.loc[(infubinol_volume < infubinol_lower_bound)|(infubinol_volume> infubinol_upper_bound)]}')
# calculate the outliers for Ceftamin
ceftamin_qurtiles = ceftamin_volume.quantile([0.25,0.5,0.75])
ceftamin_lowerq = ceftamin_qurtiles[0.25]
ceftamin_upperq = ceftamin_qurtiles[0.75]
ceftamin_iqr = ceftamin_upperq - ceftamin_lowerq
ceftamin_lower_bound = ceftamin_lowerq - (1.5*ceftamin_iqr)
ceftamin_upper_bound = ceftamin_upperq + (1.5*ceftamin_iqr)
#print out the results for ceftamin
print(f'The potential outliers for ceftamin: {ceftamin_volume.loc[(ceftamin_volume < ceftamin_lower_bound)|(ceftamin_volume> ceftamin_upper_bound)]}')
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the
#IQR and quantitatively determine if there are any potential outliers.
# Define a pivot table of mean tumor volume per timepoint for each regimen
tumor_volume = combined_data.pivot_table(values="Tumor Volume (mm3)", index = "Timepoint", columns="Drug Regimen")
# remove less interested column
tumor_volume_limited = tumor_volume[["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]]
# grab the last row (final timepoint) of the previous table as the final volume
final_tumor_volume = tumor_volume_limited.iloc[-1, :]
# convert to a DataFrame for a cleaner display
final_tumor_volume_df = final_tumor_volume.to_frame()
# rename the column to something meaningful
final_tumor_volume_df = final_tumor_volume_df.rename(columns={45: "final tumor volume"})
final_tumor_volume_df
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
#outliers = dic(markerface)
labels= ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
flierprops = {'markersize' : 10, 'markerfacecolor' : 'red'}
plt.boxplot([capumulin_volume, ramicane_volume, infubinol_volume, ceftamin_volume], labels= labels, flierprops=flierprops)
plt.ylim(0, 80)
plt.ylabel("final tumor volume (mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# separate the Capomulin data from the rest of the DataFrame
capomulin_df = combined_data.loc[combined_data["Drug Regimen"]=="Capomulin"]
# randomly select a mouse ID
capomulin_ID = random.choice(capomulin_df['Mouse ID'].unique().tolist())
# select the rows for the chosen Capomulin mouse ID
selected_capomulin_df = combined_data.loc[combined_data["Mouse ID"]==capomulin_ID,:]
#create a plot
plt.plot(selected_capomulin_df['Timepoint'],
selected_capomulin_df['Tumor Volume (mm3)'],
marker = 'o',
color = 'red')
# Give our graph axis labels and title
plt.title(f'Time point versus tumor volume in {capomulin_ID} treated with capomulin' )
plt.ylabel('tumor volume (mm3)')
plt.xlabel('days')
#show chart and set layout
plt.show()
plt.tight_layout()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
#calculate the mean of tumor volume and weight
mean_capomulin_df = capomulin_df.groupby('Mouse ID').mean()
#create the scatter plot
plt.scatter(mean_capomulin_df["Weight (g)"],
mean_capomulin_df["Tumor Volume (mm3)"],
facecolors = "black",
edgecolors = "red",
)
# Give our graph axis labels and title
plt.title("scatter plot of mouse weight versus average tumor volume for the Capomulin regimen")
plt.ylabel('average tumor volume')
plt.xlabel('mouse weight (g)')
#describe the limitation for axis
plt.ylim(mean_capomulin_df['Tumor Volume (mm3)'].min() - 1,
mean_capomulin_df['Tumor Volume (mm3)'].max() + 1)
plt.xlim(mean_capomulin_df['Weight (g)'].min() - 1,
mean_capomulin_df['Weight (g)'].max() + 1)
#show chart and set layout
plt.show()
plt.tight_layout()
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for
#the Capomulin regimen
# create the x and y axes
x_axis = mean_capomulin_df["Weight (g)"]
y_axis = mean_capomulin_df["Tumor Volume (mm3)"]
# calculate the linear regression
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_axis, y_axis)
regress_values = x_axis * slope + intercept
# the equation of the linear regression line
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
#copy the scatter plot code here!
plt.scatter(mean_capomulin_df["Weight (g)"],
mean_capomulin_df["Tumor Volume (mm3)"],
facecolors = "black",
edgecolors = "green",
)
# line plot of the regression
plt.plot(x_axis, regress_values,"r-")
# Give our graph axis labels and title
plt.title("scatter plot of mouse weight versus average tumor volume for the Capomulin regimen")
plt.ylabel('average tumor volume')
plt.xlabel('mouse weight (g)')
#describe the limitation for axis
plt.ylim(mean_capomulin_df['Tumor Volume (mm3)'].min() - 1,
mean_capomulin_df['Tumor Volume (mm3)'].max() + 1)
plt.xlim(mean_capomulin_df['Weight (g)'].min() - 1,
mean_capomulin_df['Weight (g)'].max() + 1)
#write the equation on the scattered plot
plt.annotate(line_eq, (18,36), fontsize=15,)
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
#Identify the appropriate columns to see what columns are the same
print(mouse_metadata.columns)
print(study_results.columns)
# Combine the data into a single dataset
merged_Data = pd.merge(left=mouse_metadata, right=study_results, left_on="Mouse ID", right_on="Mouse ID")
# Display the data table for preview
merged_Data.head()
# Checking the number of mice.
unique_count_mouse = len(merged_Data["Mouse ID"].unique())
data = {'Number of Mice': [unique_count_mouse]}
unique_count_mouse_df = pd.DataFrame(data, columns =["Number of Mice"])
unique_count_mouse_df
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
merged_Data["Mouse ID"].value_counts()
#clean_merged_data=merged_Data.sort_values("Timepoint").drop_duplicates(['Mouse ID'], keep='last')
#clean_merged_data["Mouse ID"].value_counts()
clean_merge_data=merged_Data.drop_duplicates(subset=["Mouse ID", "Timepoint"])
clean_merge_data["Mouse ID"].value_counts()
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
#clean_merge_data1 = clean_merged_data.drop_duplicates(subset=["Mouse ID"])
clean_merge_data.head()
# Checking the number of mice in the clean DataFrame.
new_number_of_mice = clean_merge_data["Mouse ID"].nunique()
new_number_of_mice
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end
#data_mean = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
#data_median = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
#data_variance = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
#data_standard_deviation = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
#data_sem = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
#drug_regimen_array = clean_merge_data["Mouse ID"].unique()
#print(drug_regimen_array)
#series_array = [drug_regimen_array, data_mean, data_median, data_variance, data_standard_deviation, data_sem]
#index = pd.MultiIndex.from_arrays(drug_regimen_array, names = ('Drug Regimen'))
#mean = pd.Series(data_mean, index=drug_regimen_array, name="Mean)"
#mean
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
data_mean = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
data_median = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
data_variance = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
data_standard_deviation = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
data_sem = clean_merge_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
data = {
'Mean': data_mean,
'Median': data_median,
'Variance': data_variance,
'Standard Deviation': data_standard_deviation,
'SEM': data_sem
}
summary_statistics = pd.DataFrame(data, columns = ["Mean", "Median", "Variance", "Standard Deviation", "SEM"])
summary_statistics
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# group by drug regimen and count the data points per regimen
bar_graph = clean_merge_data.groupby(["Drug Regimen"]).count()["Mouse ID"]
bar_graph.plot(kind='bar')
plt.title("No. of Mice for per treatment")
plt.ylabel("No. of Unique Mice Tested")
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
#get drug names -> list
drug_names = summary_statistics.index.tolist()
drug_names
#get test_subject_count ->list
test_subject_count = (clean_merge_data.groupby(["Drug Regimen"])["Mouse ID"].count()).tolist()
test_subject_count
#set x-axis = drug names <- use numpy.arange to help space the xaxis https://numpy.org/doc/stable/reference/generated/numpy.arange.html
xaxis = np.arange(len(test_subject_count))
xaxis = drug_names
xaxis
#create the graph
plt.figure(figsize=(len(xaxis),5))
plt.bar(xaxis, test_subject_count)
plt.title("Total number of mice per treatment")
plt.xlabel("Drug Regimen")
plt.ylabel("Test Subject Count")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_data_df = pd.DataFrame(clean_merge_data.groupby(["Sex"]).count()).reset_index()
gender_data_df
#only need 2 values in the dataframe for the pie graph since we only have 2 genders
gender_data_df = gender_data_df[['Sex', 'Mouse ID']]
gender_data_df
#https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.pie.html
plot_pie = gender_data_df.plot.pie(y='Mouse ID', figsize=(2,2))
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
#https://datatofish.com/pie-chart-matplotlib/
my_labels = gender_data_df["Sex"]
plt.pie(gender_data_df["Mouse ID"], labels= my_labels, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
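# A minimal sketch (assuming the cleaned `clean_merge_data` DataFrame built above):
# take each mouse's last timepoint, merge back to get its final tumor volume, then
# loop over the four regimens to compute IQR, flag outliers, and box-plot.
last_timepoint = clean_merge_data.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_volumes = last_timepoint.merge(clean_merge_data, on=["Mouse ID", "Timepoint"], how="left")
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
tumor_vol_data = []
for drug in treatments:
    volumes = final_volumes.loc[final_volumes["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_vol_data.append(volumes)
    lowerq, upperq = volumes.quantile([0.25, 0.75])
    iqr = upperq - lowerq
    lower_bound, upper_bound = lowerq - 1.5 * iqr, upperq + 1.5 * iqr
    outliers = volumes[(volumes < lower_bound) | (volumes > upper_bound)]
    print(f"{drug}: IQR = {iqr:.2f}, potential outliers = {list(outliers.round(2))}")
plt.boxplot(tumor_vol_data, labels=treatments)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()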
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
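# A minimal sketch (assuming `clean_merge_data` as above; the mouse plotted is just
# the first Capomulin mouse in the table).
capomulin = clean_merge_data[clean_merge_data["Drug Regimen"] == "Capomulin"]
one_mouse = capomulin[capomulin["Mouse ID"] == capomulin["Mouse ID"].iloc[0]]
one_mouse.plot(x="Timepoint", y="Tumor Volume (mm3)", marker="o", legend=False)
plt.ylabel("Tumor Volume (mm3)")
plt.title("Capomulin: tumor volume over time for one mouse")
plt.show()
# average tumor volume vs. weight, one point per mouse
per_mouse = capomulin.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
per_mouse.plot.scatter(x="Weight (g)", y="Tumor Volume (mm3)")
plt.title("Capomulin: weight vs. average tumor volume")
plt.show()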
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
! pip install scipy
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import autopep8
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
data_combined_df=pd.merge(mouse_metadata, study_results, how="inner", on="Mouse ID")
# Display the data table for preview
data_combined_df.head()
# Checking the number of mice.
mice = data_combined_df['Mouse ID'].value_counts()
numberofmice = len(mice)
numberofmice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicatemice = data_combined_df.loc[data_combined_df.duplicated(subset=['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
print (duplicatemice)
# Optional: Get all the data for the duplicate mouse ID.
duplicate_g989 = data_combined_df[data_combined_df.duplicated(['Mouse ID', 'Timepoint'])]
duplicate_g989
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = data_combined_df[data_combined_df['Mouse ID'].isin(duplicatemice)==False]
clean_df
# Checking the number of mice in the clean DataFrame.
cleanmice = clean_df['Mouse ID'].value_counts()
numberofcleanmice = len(cleanmice)
numberofcleanmice
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# mean = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
# # print (mean)
# median = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
# # print (median)
# variance = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
# # print (variance)
# std_dv = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
# # print (std_dv)
# sem = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
# # print (sem)
# summary_df = pd.DataFrame({"Mean": mean, "Median": median,
# "Variance": variance, "Standard Deviation": standard_dv, "SEM": sem})
# summary_df
# # This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
single_group_by = clean_df.groupby('Drug Regimen')
summary_df_2 = single_group_by.agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"]
summary_df_2
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
micepertreatment = clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
# micepertreatment
plot_pandas = micepertreatment.plot.bar(figsize=(15,10), color='g',fontsize = 12)
plt.xlabel("Drug Regimen",fontsize = 16)
plt.ylabel("Number of Mice",fontsize = 16)
plt.title("Total Number of Mice per Treatment",fontsize = 20)
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
x_axis = summary_df_2.index.tolist()
#x_axis
y_axis = micepertreatment.tolist()
#y_axis
tick_locations = []
for x in x_axis:
tick_locations.append(x)
plt.xlim(-.75, len(x_axis)-.25)
plt.ylim(0, max(y_axis) + 10)
plt.xlabel("Drug Regimen",fontsize = 16)
plt.ylabel("Number of Mice",fontsize = 16)
plt.title("Total Number of Mice per Treatment",fontsize = 18)
plt.bar(x_axis, y_axis, color='b', alpha=.75, align="center")
plt.xticks(tick_locations, x_axis,rotation=90)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_data_df = pd.merge(mouse_metadata, study_results, how='outer', on='Mouse ID')
# Display the data table for preview
combined_data_df
# Checking the number of mice.
num_mice = combined_data_df["Mouse ID"].nunique()
num_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mouseid_timepoint =combined_data_df[combined_data_df.duplicated(['Mouse ID', 'Timepoint'])]['Mouse ID']
duplicate_mouseid_timepoint
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mouse_id =pd.DataFrame(duplicate_mouseid_timepoint)
duplicate_mouse_id
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_data_df = combined_data_df[~combined_data_df["Mouse ID"].isin(duplicate_mouseid_timepoint)]
# Checking the number of mice in the clean DataFrame.
num_of_mice = clean_data_df['Mouse ID'].value_counts()
num_of_mice
clean_data_df.columns
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean=clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
mean
summary=clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].describe()
summary
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(['mean', 'median', 'var', 'std', 'sem'])
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata, study_results, how="inner", on="Mouse ID")
# Display the data table for preview
combined_df
# Checking the number of mice.
mouse_count = combined_df["Mouse ID"].count()
mouse_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_rows = combined_df[combined_df.duplicated(['Mouse ID', 'Timepoint'])]
duplicate_rows
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_rows = combined_df[combined_df.duplicated(['Mouse ID',])]
all_duplicate_rows
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combined_df.drop_duplicates("Mouse ID")
clean_df
# Checking the number of mice in the clean DataFrame.
mouse_count = clean_df["Mouse ID"].count()
mouse_count
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
mean = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
variance = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
standard_dv = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
sem = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
# Assemble the resulting series into a single summary dataframe.
summary_df = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance, "Standard Deviation": standard_dv, "SEM": sem})
summary_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
summary_df = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance, "Standard Deviation": standard_dv, "SEM": sem})
summary_df
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
drug_data = pd.DataFrame(combined_df.groupby(["Drug Regimen"]).count()).reset_index()
#Alter the dataframe down to two columns
drugs_df = drug_data[["Drug Regimen", "Mouse ID"]]
drugs_df = drugs_df.set_index("Drug Regimen")
#Creating the bar chart
drugs_df.plot(kind="bar", figsize=(10,3))
plt.title("Drug Treatment Count")
plt.show()
plt.tight_layout()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
drug_list = summary_df.index.tolist()
drug_list
#Turn drug_count into a list
drug_count = (combined_df.groupby(["Drug Regimen"])["Age_months"].count()).tolist()
drug_count
x_axis = np.arange(len(drug_count))
#Assign x-axis
x_axis = drug_list
#Creating and customizing bar chart
plt.figure(figsize=(11,4))
plt.bar(x_axis, drug_count, color='b', alpha=0.5, align="center")
plt.title("Drug Treatment Count")
plt.xlabel("Drug Regimen")
plt.ylabel("Count")
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_df = pd.DataFrame(combined_df.groupby(["Sex"]).count()).reset_index()
gender_df.head()
#Alter the dataframe down to two columns
gender_df = gender_df[["Sex","Mouse ID"]]
gender_df.head()
#Configuration of actual plot
plt.figure(figsize=(12,6))
ax1 = plt.subplot(121, aspect="equal")
gender_df.plot(kind="pie", y = "Mouse ID", ax=ax1, autopct='%1.1f%%',
startangle=190, shadow=True, labels=gender_df["Sex"], legend = False, fontsize=14)
plt.title("Male & Female Mice Percentage")
plt.xlabel("")
plt.ylabel("")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_count = (combined_df.groupby(["Sex"])["Age_months"].count()).tolist()
gender_count
#Adding details to the pie chart
labels = ["Females", "Males"]
colors = ["purple", "orange"]
explode = (0.1, 0)
#Creating the pie chart
plt.pie(gender_count, explode=explode, labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=160)
plt.axis("equal")
#Clears for next plot
plt.clf()
plt.cla()
plt.close()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
combined_df.head()
sorted_df = combined_df.sort_values(["Drug Regimen", "Mouse ID", "Timepoint"], ascending=True)
last_df = sorted_df.loc[sorted_df["Timepoint"] == 45]
last_df.head().reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
#Make column "Tumor Volume (mm3)" a dataframe object
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
capo_df = last_df[last_df["Drug Regimen"].isin(["Capomulin"])]
capo_df.head().reset_index()
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
capo_obj = capo_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
capo_obj = capo_obj["Tumor Volume (mm3)"]
capo_obj
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = capo_obj.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The median of temperatures is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
outlier_tumor_volumes = capo_df.loc[(capo_df['Tumor Volume (mm3)'] < lower_bound) | (capo_df['Tumor Volume (mm3)'] > upper_bound)]
outlier_tumor_volumes
print(f"The minimum Tumor Volume (mm3) of the potential outliers is {outlier_tumor_volumes['Tumor Volume (mm3)'].min()}")
print(f"The maximum Tumor Volume (mm3) of the potential outliers is {outlier_tumor_volumes['Tumor Volume (mm3)'].max()}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volume in Capomulin Regimen")
ax1.set_ylabel("Final Tumor Volume (mm3)")
ax1.boxplot(capo_obj)
plt.show()
#Grab data from "Ramicane" and reset index
ram_df = last_df[last_df["Drug Regimen"].isin(["Ramicane"])]
ram_df.head().reset_index()
#Make column "Tumor Volume (mm3)" a dataframe object
ram_obj = ram_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
ram_obj = ram_obj["Tumor Volume (mm3)"]
ram_obj
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = ram_obj.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The median of temperatures is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volume in Ramicane Regimen")
ax1.set_ylabel("Final Tumor Volume (mm3)")
ax1.boxplot(ram_obj)
plt.show()
#Grab data from "Infubinol" and reset index
infu_df = last_df[last_df["Drug Regimen"].isin(["Infubinol"])]
infu_df.head().reset_index()
#Make column "Tumor Volume (mm3)" a dataframe object
infu_obj = infu_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
infu_obj = infu_obj["Tumor Volume (mm3)"]
infu_obj
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = infu_obj.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The median of temperatures is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volume in Infubinol Regimen")
ax1.set_ylabel("Final Tumor Volume (mm3)")
ax1.boxplot(infu_obj)
plt.show()
#Grab data from "Ceftamin" and reset index
ceft_df = last_df[last_df["Drug Regimen"].isin(["Ceftamin"])]
ceft_df.head().reset_index()
#Make column "Tumor Volume (mm3)" a dataframe object
ceft_obj = ceft_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
ceft_obj = ceft_obj["Tumor Volume (mm3)"]
ceft_obj
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = ceft_obj.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The median of temperatures is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volume in Ceftamin Regimen")
ax1.set_ylabel("Final Tumor Volume (mm3)")
ax1.boxplot(ceft_obj)
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_df = combined_df.loc[combined_df["Drug Regimen"] == "Capomulin"]
capomulin_df = capomulin_df.reset_index()
capomulin_df.head()
# Grab data from one mouse
capo_mouse = capomulin_df.loc[capomulin_df["Mouse ID"] == "s185"]
capo_mouse
#Arrange data into two columns
capo_mouse = capo_mouse.loc[:, ["Timepoint", "Tumor Volume (mm3)"]]
#Now reset the index and generate a line plot showing the tumor volume for mice treated with Capomulin
capo_mouse = capo_mouse.reset_index(drop=True)
capo_mouse.set_index("Timepoint").plot(figsize=(10,8), linewidth=2.5, color="blue")
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_df.head()
#Arrange data into 3 columns
weight_df = capomulin_df.loc[:, ["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
weight_df.head()
# Get the average tumor volume for each mouse under the use of Capomulin
avg_capo = pd.DataFrame(weight_df.groupby(["Mouse ID", "Weight (g)"])["Tumor Volume (mm3)"].mean()).reset_index()
avg_capo.head()
#Rename "Tumor Volume (mm3)" column to "Average Volume"
avg_capo = avg_capo.rename(columns={"Tumor Volume (mm3)": "Average Volume"})
avg_capo.head()
#Creating the scatter plot of mouse wight compared to the average tumor volume for Capomulin
avg_capo.plot(kind="scatter", x="Weight (g)", y="Average Volume", grid=True, figsize=(4,4), title="Weight vs. Average Tumor Volume")
plt.show()
#Clears for next plot
plt.clf()
plt.cla()
plt.close()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
mouse_weight = avg_capo.iloc[:,1]
avg_tumor_volume = avg_capo.iloc[:,2]
# We then compute the Pearson correlation coefficient between "Mouse Weight" and "Average Tumor Volume"
correlation = st.pearsonr(mouse_weight,avg_tumor_volume)
print(f"The correlation between both factors is {round(correlation[0],2)}")
# import linregress
from scipy.stats import linregress
# Add the lineear regression equation and line to the scatter plot
x_values = avg_capo["Weight (g)"]
y_values = avg_capo["Average Volume"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values, y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel("Mouse Weight")
plt.ylabel("Average Tumor Volume")
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
output = pd.merge(mouse_metadata,study_results,
on='Mouse ID',
how='outer')
# Display the data table for preview
output
# Checking the number of mice.
number_mice = output['Mouse ID'].count()
mice_number = pd.DataFrame({"Number of Mice": [number_mice]}, index=[0])
mice_number
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
#output[df.duplicated()]
dup_rows = output[output.duplicated(['Mouse ID', 'Timepoint'])]
dup_rows
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_mouse = output[output.duplicated(['Mouse ID'])]
all_duplicate_mouse
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# apply drop_duplicates to get a clean DataFrame
dropping_duplicates = output.drop_duplicates('Mouse ID')
dropping_duplicates
# Checking the number of mice in the clean DataFrame.
clean_data = dropping_duplicates['Mouse ID'].count()
clean_data = pd.DataFrame({"Number of Mice": [clean_data]}, index=[0])
clean_data
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
summary_reg = pd.DataFrame(output.groupby("Drug Regimen").count())
summary_reg ["Mean"] = pd.DataFrame(output.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean())
summary_reg ["Median"] = pd.DataFrame(output.groupby("Drug Regimen")["Tumor Volume (mm3)"].median())
summary_reg ["Variance"] = pd.DataFrame(output.groupby("Drug Regimen")["Tumor Volume (mm3)"].var())
summary_reg ["Standard Deviation"] = pd.DataFrame(output.groupby("Drug Regimen")["Tumor Volume (mm3)"].std())
summary_reg ["SEM"] = pd.DataFrame(output.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem())
# Assemble the resulting series into a single summary dataframe.
summary_reg = summary_reg[["Mean","Median","Variance","Standard Deviation","SEM"]]
summary_reg.head()
# Generate a summary statistics table of mean, median, variance, standard deviation,and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
#summary_reg = summary_reg[["Mean","Median","Variance","Standard Deviation","SEM"]]
regg_regi = output.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(['mean','median','var','std','sem'])
regg_regi
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
total_measurements = output.groupby("Drug Regimen").agg({"Timepoint":'count'})
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
total_measurements.plot(kind="bar", figsize=(10,3))
plt.title("Number of Measurements by Drug Regimen")
plt.show()
plt.tight_layout()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_distribution = output.groupby(["Mouse ID","Sex"])
#group them by size
gender_distribution = pd.DataFrame(gender_distribution.size())
#begin breaking down the dataframe/rename by female and male counts
gender_summary = pd.DataFrame(gender_distribution.groupby(["Sex"]).count())
gender_summary.columns = ["Total Count"]
#create the percentage by dividing for the pie plot
gender_summary["Distribution of Mice by Gender"] = (100*(gender_summary["Total Count"]/gender_summary["Total Count"].sum()))
#plot the pie chart
explode = (0.1,0)
colors = ['pink','brown']
plot = gender_summary.plot.pie(y='Total Count',
figsize=(6,6),
colors=colors,
startangle=140,
explode = explode,
shadow = True,
autopct="%1.1f%%")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sex_distribution = (output.groupby(["Sex"])["Mouse ID"].count())
labels = ["Female","Males"]
colors = ["Pink","brown"]
explode = (0.1,0)
plt.pie(sex_distribution,
explode=explode,
labels=labels,
colors=colors,
autopct="%1.1f%%",
shadow=True,
startangle=160)
plt.axis("equal")
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
treatment_regimens = output[output["Drug Regimen"].isin(["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])]
# Start by getting the last (greatest) timepoint for each mouse
treatment_regimens = treatment_regimens.sort_values(["Timepoint"],ascending = True)
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
treatment_summary = treatment_regimens[["Drug Regimen", "Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
treatment_summary
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
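# A minimal sketch (assuming `treatment_summary` above, already sorted by Timepoint
# ascending): keep each mouse's last (greatest) timepoint, then box-plot the final
# tumor volume by regimen.
final_volumes = treatment_summary.drop_duplicates(subset="Mouse ID", keep="last")
final_volumes.boxplot(column="Tumor Volume (mm3)", by="Drug Regimen", grid=False)
plt.title("Final Tumor Volume by Regimen")
plt.suptitle("")
plt.ylabel("Tumor Volume (mm3)")
plt.show()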
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
line_cap = output.loc[(output["Drug Regimen"] == "Capomulin"),:]
line_cap = line_cap.reset_index()
#select only one mouse
one_mouse = line_cap .loc[line_cap["Mouse ID"] == "s185"]
time_point = one_mouse["Timepoint"]
tumor_line = one_mouse["Tumor Volume (mm3)"]
tumor_line = plt.plot(time_point,tumor_line)
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume')
plt.title('Tumor Volume of mice on Capomulin')
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
# Pull values for x and y values
mice_weight = line_cap.groupby(line_cap["Mouse ID"])["Weight (g)"].mean()
tumor_vol = line_cap.groupby(line_cap["Mouse ID"])["Tumor Volume (mm3)"].mean()
#plot the values
plt.scatter(mice_weight, tumor_vol)
plt.xlabel("Weight of Mouse")
plt.ylabel("Tumor Volume")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
mice_weight = line_cap.groupby(line_cap["Mouse ID"])["Weight (g)"].mean()
tumor_vol = line_cap.groupby(line_cap["Mouse ID"])["Tumor Volume (mm3)"].mean()
slope, int, r, p, std_err = st.linregress(mice_weight,tumor_vol)
fit = slope * mice_weight + int
#plot the linear regression model
plt.scatter(mice_weight,tumor_vol)
plt.xlabel("Weight of Mouse")
plt.ylabel("Tumor Volume")
plt.plot(mice_weight,fit,"--")
plt.xticks(mice_weight, rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights As a conclusion of the Python Matplotlib challenge, I can say that the Capomulin treatment was the most effective at treating tumors in the mice. The majority of the mice tested were male. There were no outliers. As the dose of the drug increases, the size of the tumor decreases. Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combined_study_data=pd.merge(study_results,mouse_metadata,how='outer', on="Mouse ID")
combined_study_data.head(20)
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Create an initial summary table with drug regimens and counts
tumor_data = pd.DataFrame(combined_study_data.groupby("Drug Regimen").count())
tumor_data["Mean"] = pd.DataFrame(combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean())
tumor_data["Median"] = pd.DataFrame(combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"].median())
tumor_data["Standard Deviation"] = pd.DataFrame(combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"].std())
tumor_data["Variance"] = pd.DataFrame(combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"].var())
tumor_data["SEM"] = pd.DataFrame(combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem())
tumor_data = tumor_data[[ "Mouse ID","Mean", "Median", "Standard Deviation", "Variance", "SEM"]]
tumor_data.head()
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
tumor_data = tumor_data[["Mouse ID"]]
tumor_data.plot(kind="bar", figsize=(6,4), color = "r", legend=False)
plt.title("Trials per Drug Regime")
plt.show()
plt.tight_layout()
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
x_axis = np.arange(len(tumor_data))
tick_locations = [value for value in x_axis]
plt.figure(figsize=(6,4))
plt.bar(x_axis, tumor_data["Mouse ID"], color = "r", width = .5)
plt.xticks(tick_locations, tumor_data.index.values, rotation="vertical")
plt.title("Trials per Drug Regime")
plt.xlim(-0.75, len(x_axis)-.25)
plt.ylim(0, max(tumor_data["Mouse ID"])+10)
###Output
_____no_output_____
###Markdown
Pie plots
###Code
combined_study_data.head()
# Generate a pie plot showing the distribution of female versus male mice using pandas
combined_study_data.groupby("Sex")["Sex"].count()
explode=(0.01,0)
sexPie=combined_study_data.groupby("Sex")['Sex'].count().sort_index(ascending=False)
sexPie.plot(kind='pie', explode=explode, autopct="%1.1f%%", shadow=True)
plt.title("Distribution Of Female Vs male")
plt.savefig("Pandas Pie.png")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(sexPie, explode=explode, autopct="%1.1f%%", shadow=True, labels=sexPie.index)
plt.title("Distribution Of Female Vs male")
plt.savefig("Pyplot Pie.png")
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
tumor_number = combined_study_data.loc[(combined_study_data["Drug Regimen"]=="Capomulin") | (combined_study_data["Drug Regimen"] == "Ramicane") | (combined_study_data["Drug Regimen"] == "Infubinol") | (combined_study_data["Drug Regimen"] == "Ceftamin"), :]
tumor_number = tumor_number.sort_values("Timepoint", ascending= False)
tumor_number = tumor_number.drop_duplicates(subset="Mouse ID", keep='first')
quartiles = tumor_number['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_data = tumor_number.loc[(tumor_number['Tumor Volume (mm3)'] > upper_bound) | (tumor_number['Tumor Volume (mm3)'] < lower_bound), :]
tumor_data.head()
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
tumor_volume= tumor_number['Tumor Volume (mm3)']
fig1, ax1 = plt.subplots()
ax1.set_title('Tumor Volume of each Mouse')
ax1.set_ylabel('Tumor Volume (mm3)')
ax1.boxplot(tumor_volume)
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_data = combined_study_data.loc[(combined_study_data["Drug Regimen"]== "Capomulin") & (combined_study_data["Mouse ID"]== "b128"),:]
timepoint = capomulin_data["Timepoint"]
tumor_volume = capomulin_data["Tumor Volume (mm3)"]
plt.title("Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin")
plt.xlabel("Tumor Volume (mm3)")
plt.ylabel("Timepoint")
tumor_value_data= plt.plot(timepoint, tumor_volume)
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_result = combined_study_data.loc[(combined_study_data ["Drug Regimen"] == "Capomulin"), :]
mouse_weight = capomulin_result.groupby("Mouse ID")["Weight (g)"].mean()
tumor_volume = capomulin_result.groupby("Mouse ID")["Tumor Volume (mm3)"].mean()
plt.scatter(mouse_weight,tumor_volume)
plt.xlabel("Mouse's weight(g)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Mouse weight Vs Average Tumor Volume (mm3) For The Capomulin Regimen")
plt.show()
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
mouse_weight = capomulin_result.groupby(capomulin_result["Mouse ID"])["Weight (g)"].mean()
tumor_volume = capomulin_result.groupby(capomulin_result["Mouse ID"])["Tumor Volume (mm3)"].mean()
slope, intercept, rvalue, pvalue, stderr = st.linregress(mouse_weight, tumor_volume)
line = slope * mouse_weight + intercept
plt.scatter(mouse_weight,tumor_volume)
plt.title("Correlation Coefficient And Linear Regression Model For Mouse Weight And Average Tumor Volume For The Capomulin Regimen")
plt.xlabel("Mouse's weight (g)")
plt.ylabel("Tumor Volume (mm3)")
plt.plot(mouse_weight,line,"--")
plt.xticks(mouse_weight, rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
The purpose of this study was to compare the performance of Pymaceuticals' drug of interest, Capomulin, against the other treatment regimens. Observations and Insights (Review) The most effective drug is Capomulin: it reduces tumor volume and limits metastatic spread. Tumor Volume vs. Timepoint: tumor volume starts to drop soon after treatment begins, with one reservation around timepoints 35-40 where a slight increase appears; even so, it remains the most effective treatment. Mice treated with Infubinol also showed less tumor growth. ![Lab.png](attachment:Lab.png)
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combine_df = pd.merge(mouse_metadata, study_results, on="Mouse ID")
# Display the data table for preview
combine_df.head()
# Checking the number of mice.
combine_df["Mouse ID"].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
combine_df[combine_df.duplicated(['Mouse ID', 'Timepoint'])]
# Optional: Get all the data for the duplicate mouse ID.
combine_df[combine_df['Mouse ID'] == 'g989']
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean = combine_df[combine_df['Mouse ID'] != 'g989']
# Checking the number of mice in the clean DataFrame.
clean['Mouse ID'].nunique()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume
# for each regimen
grp = clean.groupby('Drug Regimen')['Tumor Volume (mm3)']
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
pd.DataFrame({
'mean': grp.mean(),
'median': grp.median(),
'var': grp.var(),
'std': grp.std(),
'sem': grp.sem()
})
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
grp.agg(['mean','median','var','std','sem'])
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
grp.count().plot(kind='bar',rot=45,title='Total Measurements')
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
plt.bar(grp.count().index, grp.count())
plt.xticks(rotation=45)
plt.title('Total Measurements')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
pieChart = combine_df.groupby('Sex')['Mouse ID'].count()
pieChart.plot(kind='pie', autopct = '%1.1f%%', shadow = True, startangle=45,explode=(0.05,0))
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(pieChart, labels=["Male", "Female"], colors = ["yellow", "gray"], autopct="%1.1f%%", shadow=True, explode=(.05,0))
plt.title("Male vs. Female Mice Distribution", y=1.02, fontsize=15);
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights Inferences that can be made from the data- Capomulin and Ramicane are the most effective Drug Regimens for treatment of cancer in mice.- Only Infubinol shows one possible outlier, so in general, the most promising treatment regimens behave consistently.- The Pearson correlation coefficient between Mouse Weight and Average Tumor Volume for the Capomulin Treatment, $r=0.8419363424694719$, is high so it indicates that the data points are not too far from the line of best fit, that is, as the value of the weight variable increases, so does the value of the tumor volume variable.- The Linear Regression Model for Mouse Weight and Average Tumor Volume for the Capomulin Treatment generates the Line Equation $y = 0.95 x + 21.55$ with the $r^2 = 0.7088568047708719$.
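As a quick arithmetic check using only the numbers reported above: $r^2 = (0.8419363424694719)^2 \approx 0.70886$, which matches the reported coefficient of determination, and the regression line predicts an average tumor volume of $y = 0.95 \cdot 20 + 21.55 = 40.55$ mm3 for a hypothetical 20 g mouse.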
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
from sklearn import datasets
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
study_data_complete = pd.merge(mouse_metadata, study_results, on="Mouse ID")
# Display the data table for preview
study_data_complete
# Checking the number of unique mice.
print(f'Number of unique mice: {len(study_data_complete["Mouse ID"].unique())}')
# Checking the number of measurements.
print(f'Number of measurements: {len(study_data_complete)}')
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = study_data_complete["Mouse ID"][study_data_complete.duplicated(subset=['Mouse ID', 'Timepoint'])]
duplicate_mice
# Optional: Get all the data for the duplicate mouse ID.
duplicate_data = study_data_complete[study_data_complete.duplicated(subset=['Mouse ID', 'Timepoint'])]
duplicate_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_data = study_data_complete.drop_duplicates(subset=['Mouse ID', 'Timepoint'], keep="first")
clean_data
# Checking the number of unique mice in the clean DataFrame:
print(f'Number of unique mice: {len(clean_data["Mouse ID"].unique())}')
# Checking the number of measurements:
print(f'Number of measurements: {len(clean_data)}')
###Output
Number of unique mice: 249
Number of measurements: 1888
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
grouped_data = clean_data.groupby("Drug Regimen")
# mean, median, variance, standard deviation, and SEM of the tumor volume.
average_tv = grouped_data["Tumor Volume (mm3)"].mean()
median_tv = grouped_data["Tumor Volume (mm3)"].median()
var_tv = grouped_data["Tumor Volume (mm3)"].var()
stddev_tv = grouped_data["Tumor Volume (mm3)"].std()
sem_tv = grouped_data["Tumor Volume (mm3)"].sem()
# Assemble the resulting series into a single summary dataframe.
summary_df = pd.DataFrame({"Mean": average_tv,
"Median": median_tv,
"Variance": var_tv,
"Std Dev": stddev_tv,
"SEM": sem_tv})
summary_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
# Group by and aggregate functions
another_summary_df = grouped_data.agg({"Tumor Volume (mm3)":["mean","median","var","std","sem"]})
another_summary_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Count how many times each Drug Regimen appears in our group
count_measurements = grouped_data['Drug Regimen'].size()
# Create a bar chart based off of the group series from before
count_chart = count_measurements.plot(kind='bar', color='blue')
# Set the title, xlabel and ylabel using class methods
count_chart.set_title('Total number of Measurements taken on each Drug Regimen\n')
count_chart.set_xlabel("Drug Regimen")
res = count_chart.set_ylabel("Measurements")
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Reset the index for this DataFrame so "Drug Regimen" is a column
grouped_data_default_index = count_measurements.reset_index(name='count')
# Get columns
measurements = grouped_data_default_index['count']
drug_regimen = grouped_data_default_index['Drug Regimen']
plt.xticks(np.arange(len(drug_regimen)), drug_regimen, rotation='vertical')
# Set the title, xlabel and ylabel using class methods
plt.title('Total number of Measurements taken on each Drug Regimen\n')
plt.xlabel("Drug Regimen")
plt.ylabel("Measurements")
# use pandas to create a bar chart from the dataframe
res = plt.bar(drug_regimen, measurements, width=0.5, color='blue')
# Generate a pie plot showing the distribution of female versus male mice using pandas
# We are looking for 'Sex' value in all measurements, that is, in the complete dataset
# Create a group based on the values in the 'Sex' column
sex_group = clean_data.groupby('Sex')
# Count how many times each sex appears in our group
count_sex = sex_group['Sex'].count()
# Create a bar chart based off of the group series from before
count_chart = count_sex.plot(kind='pie', autopct='%1.1f%%', ylabel ='', shadow=True)
# Set the title, xlabel and ylabel using class methods
res = count_chart.set_title('Distribution of Female vs Male mice in the study')
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Reset the index for this DataFrame so "Sex" is a column
grouped_data_default_index = count_sex.reset_index(name='count')
# Get columns
count = grouped_data_default_index['count']
sex = grouped_data_default_index['Sex']
# Set the title, xlabel and ylabel using class methods
plt.title('Distribution of Female vs Male mice in the study')
# Create a pie plot based off of the group series from before
res = plt.pie(count, labels=sex, autopct='%1.1f%%', shadow=True)
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
regimens_data = clean_data.set_index('Drug Regimen').loc[['Capomulin','Ramicane','Infubinol','Ceftamin'],
['Mouse ID','Timepoint','Tumor Volume (mm3)']]
regimens_data.reset_index(inplace=True)
# Start by getting the last (greatest) timepoint for each mouse
idx = regimens_data.groupby(['Mouse ID'])['Timepoint'].transform(max) == regimens_data['Timepoint']
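# transform(max) broadcasts each mouse's maximum Timepoint back onto every row, so the comparison yields a boolean mask marking each mouse's final observation(s)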
last_timepoint = regimens_data[idx]
final_tv = last_timepoint.rename(columns={"Tumor Volume (mm3)": "Final Tumor Volume"})
print(final_tv)
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merged_df = pd.merge(clean_data, final_tv[['Mouse ID','Final Tumor Volume']], on=["Mouse ID"])
merged_df
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin','Ramicane','Infubinol','Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_data = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for treatment in treatments:
# Locate the rows which contain mice on each drug and get the tumor volumes
subset = final_tv.loc[(final_tv['Drug Regimen'] == treatment),['Final Tumor Volume']]['Final Tumor Volume']
tumor_vol_data.append(subset)
# Determine outliers using upper and lower bounds
quartiles = subset.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"\nTreatment {treatment}:\n")
print(subset.describe())
print(f"\nThe lower quartile of tumor volume is: {lowerq}")
print(f"The upper quartile of tumor volume is: {upperq}")
print(f"The interquartile range (IQR) of tumor volume is: {iqr}")
print(f"The the median of tumor volume is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"\nValues below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Show possible outliers (if any)
outlier_tumor_vol = final_tv.loc[(final_tv['Drug Regimen'] == treatment) &
((final_tv['Final Tumor Volume'] < lower_bound) |
(final_tv['Final Tumor Volume'] > upper_bound))]
if len(outlier_tumor_vol)>0:
print('These might be some outliers for this treatment:')
print(outlier_tumor_vol)
else:
print('Most likely, there are no outliers for this treatment!')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume across four regimens of interest\n')
ax1.set_ylabel('Final Tumor Volume of each mouse')
ax1.boxplot(x=tumor_vol_data, labels=treatments)
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
mouse_id = "m601"
capomulin_mouse = merged_df.loc[merged_df['Mouse ID']==mouse_id].sort_values("Timepoint")
# Set the title, xlabel and ylabel
plt.title(f'Tumor Volume vs Time Point \nfor mouse "{mouse_id}" treated with Capomulin\n')
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Timepoint")
# Plot
values, = plt.plot(capomulin_mouse['Timepoint'], capomulin_mouse['Tumor Volume (mm3)'],
marker="s", color="Red", linewidth=1)
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
# Find Average Tumor Volume
capomulin_set = merged_df.loc[merged_df['Drug Regimen']=='Capomulin']
avg_capomulin = capomulin_set.groupby("Mouse ID").mean().reset_index()
# Assign x and y values
x_values = avg_capomulin['Weight (g)']
y_values = avg_capomulin['Tumor Volume (mm3)']
# Set the title, xlabel and ylabel
plt.title('Average Tumor Volume vs \nMouse Weight for the Capomulin regimen\n')
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
# Plot
plt.scatter(x_values, y_values)
# (no color mapping is applied to this scatter, so a colorbar is not needed)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(x_values,y_values)
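# st.pearsonr returns a (correlation coefficient, p-value) pair, so correlation[0] below is Pearson's r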
print(f"The correlation between Mouse Weight and Average Tumor Volume for the Capomulin regimen is {round(correlation[0],2)}")
# Calculate the linear regression model for mouse weight and average tumor volume for the Capomulin regimen
# Calculate the line equation using linear regression function
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# Plot original data using Scatter type
plt.scatter(x_values,y_values, label='original data')
# (no color mapping is applied to this scatter, so a colorbar is not needed)
# Plot fitted line using Line type
plt.plot(x_values,regress_values,"r-", label='fitted line')
# Add line equation to plot
plt.annotate(line_eq,(19,36),fontsize=15,color="red")
# Set title, xlabel, ylabel and legend
plt.title('Linear Regression Model for Mouse Weight and \nAverage Tumor Volume for Capomulin regimen\n')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.legend()
# Show r-squared value
r_value = f'r={round(rvalue**2,4)}'
plt.annotate(r_value,(21,38),fontsize=15,color="green")
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_data_files_df = mouse_metadata.merge(study_results, how="left", on="Mouse ID", sort=False)
# Display the data table for preview
combined_data_files_df
# Checking the number of mice.
mouse_count = combined_data_files_df["Mouse ID"].count()
mouse_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mouse_id = combined_data_files_df[combined_data_files_df.duplicated(["Mouse ID", "Timepoint"])]
duplicate_mouse_id
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_data = combined_data_files_df[combined_data_files_df.duplicated(["Mouse ID"])]
all_duplicate_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_combined_data_df = combined_data_files_df.drop_duplicates("Mouse ID")
clean_combined_data_df
# Checking the number of mice in the clean DataFrame.
mouse_count = clean_combined_data_df["Mouse ID"].count()
mouse_count
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean = combined_data_files_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean()
median = combined_data_files_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].median()
variance = combined_data_files_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].var()
standard_deviation = combined_data_files_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].std()
sem = combined_data_files_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem()
summary_statistics_df = pd.DataFrame({"Mean":mean, "Median":median, "Variance":variance, "Standard Deviation":standard_deviation, "SEM":sem})
summary_statistics_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
drug_regimen = combined_data_files_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
#Created Chart
drug_regimen.plot(kind="bar", figsize=(10,4))
plt.title("Drug Intake Analysis")
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
#Drug regimen list
drug_regimen = summary_statistics_df.index.tolist()
drug_regimen
#Counting drug regimen for x-axis
drug_count = (combined_data_files_df.groupby(["Drug Regimen"])["Age_months"].count()).tolist()
drug_count
x_axis = np.arange(len(drug_count))
#Assigning drug regimen to the x-axis
x_axis= drug_regimen
#Creating bar chart
plt.figure(figsize=(10,4))
plt.bar(x_axis, drug_count, color='c', alpha=0.5, align="center")
plt.title("Drug Intake Analysis")
plt.xlabel("Drug Regimen")
plt.ylabel("Count")
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_df = combined_data_files_df.groupby(["Mouse ID", "Sex"])
gender_df
gender_df = pd.DataFrame(gender_df.size())
#Total male and female mice in created dataframe
gender_df = pd.DataFrame(gender_df.groupby(["Sex"]).count())
gender_df.columns = ["Total Count"]
#Pie chart with percentage formatting
chart = gender_df.plot.pie(y="Total Count", figsize=(5,5), startangle=140, shadow = True, autopct="%1.1f%%")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_count = (combined_data_files_df.groupby(["Sex"])["Age_months"].count()).tolist()
gender_count
#Labels for pie chart
labels = ["Females", "Males"]
#Color change
colors= ["pink", "brown"]
plt.pie(gender_count, labels=labels, colors=colors, autopct="%1.1f%%", startangle=140)
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
combined_data_files_df.head()
#Sorting data by Drug Regimen, Mouse ID, and Timepoint.
sorted_data_df = combined_data_files_df.sort_values(["Drug Regimen", "Mouse ID", "Timepoint"], ascending=True)
final_df = sorted_data_df.loc[sorted_data_df["Timepoint"]==45]
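# Note: this keeps only mice that reached the final scheduled timepoint (45); mice that dropped out earlier are excluded from the boxplots below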
final_df.head()
#Capomulin data
capomulin_data_df = final_df[final_df["Drug Regimen"].isin(["Capomulin"])]
capomulin_data_df.head()
#Make Tumor Volume a dataframe object
capomulin_data_object = capomulin_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
capomulin_data_object = capomulin_data_object["Tumor Volume (mm3)"]
capomulin_data_object
#Quartile and Outlier calculations for the drug "Capomulin".
quartiles = capomulin_data_object.quantile([.25,.5,.75])
lower_quartiles = quartiles[0.25]
upper_quartiles = quartiles[0.75]
IQR = upper_quartiles - lower_quartiles
low_bound = lower_quartiles - (1.5*IQR)
up_bound = upper_quartiles + (1.5*IQR)
#Print statements
print(f"The lower quartile is: {lower_quartiles}")
print(f"The upper quartile is: {upper_quartiles}")
print(f"The interquartile is: {IQR}")
print(f"Values below {low_bound} are outliers.")
print(f"Values below {up_bound} are outliers.")
#Box plot for the drug "Capomulin"
fig1, axis1 = plt.subplots()
axis1.set_title("Tumor Volume for Capomulin")
axis1.set_ylabel("Tumor Volume (mm3)")
axis1.boxplot(capomulin_data_object)
plt.show()
#Ramicane data
ramicane_data_df = final_df[final_df["Drug Regimen"].isin(["Ramicane"])]
ramicane_data_df.head()
#Make Tumor Volume a dataframe object
ramicane_data_object = ramicane_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
ramicane_data_object = ramicane_data_object["Tumor Volume (mm3)"]
ramicane_data_object
#Quartile and Outlier calculations for the drug "Ramicane".
quartiles = ramicane_data_object.quantile([.25,.5,.75])
lower_quartiles = quartiles[0.25]
upper_quartiles = quartiles[0.75]
IQR = upper_quartiles - lower_quartiles
low_bound = lower_quartiles - (1.5*IQR)
up_bound = upper_quartiles + (1.5*IQR)
#Print statements
print(f"The lower quartile is: {lower_quartiles}")
print(f"The upper quartile is: {upper_quartiles}")
print(f"The interquartile is: {IQR}")
print(f"Values below {low_bound} are outliers.")
print(f"Values below {up_bound} are outliers.")
#Box plot for the drug "Ramicane"
fig1, axis1 = plt.subplots()
axis1.set_title("Tumor Volume for Ramicane")
axis1.set_ylabel("Tumor Volume (mm3)")
axis1.boxplot(ramicane_data_object)
plt.show()
#Infubinol data
infubinol_data_df = final_df[final_df["Drug Regimen"].isin(["Infubinol"])]
infubinol_data_df.head()
#Make Tumor Volume a dataframe object
infubinol_data_object = infubinol_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
infubinol_data_object = infubinol_data_object["Tumor Volume (mm3)"]
infubinol_data_object
#Quartile and Outlier calculations for the drug "Infubinol".
quartiles = infubinol_data_object.quantile([.25,.5,.75])
lower_quartiles = quartiles[0.25]
upper_quartiles = quartiles[0.75]
IQR = upper_quartiles - lower_quartiles
low_bound = lower_quartiles - (1.5*IQR)
up_bound = upper_quartiles + (1.5*IQR)
#Print statements
print(f"The lower quartile is: {lower_quartiles}")
print(f"The upper quartile is: {upper_quartiles}")
print(f"The interquartile is: {IQR}")
print(f"Values below {low_bound} are outliers.")
print(f"Values below {up_bound} are outliers.")
#Box plot for the drug "Infubinol"
fig1, axis1 = plt.subplots()
axis1.set_title("Tumor Volume for Infubinol")
axis1.set_ylabel("Tumor Volume (mm3)")
axis1.boxplot(infubinol_data_object)
plt.show()
#Ceftamin data
ceftamin_data_df = final_df[final_df["Drug Regimen"].isin(["Ceftamin"])]
ceftamin_data_df.head()
#Make Tumor Volume a dataframe object
ceftamin_data_object = ceftamin_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
ceftamin_data_object = ceftamin_data_object["Tumor Volume (mm3)"]
ceftamin_data_object
#Quartile and Outlier calculations for the drug "Ceftamin".
quartiles = ceftamin_data_object.quantile([.25,.5,.75])
lower_quartiles = quartiles[0.25]
upper_quartiles = quartiles[0.75]
IQR = upper_quartiles - lower_quartiles
low_bound = lower_quartiles - (1.5*IQR)
up_bound = upper_quartiles + (1.5*IQR)
#Print statements
print(f"The lower quartile is: {lower_quartiles}")
print(f"The upper quartile is: {upper_quartiles}")
print(f"The interquartile is: {IQR}")
print(f"Values below {low_bound} are outliers.")
print(f"Values below {up_bound} are outliers.")
#Box plot for the drug "Ceftamin"
fig1, axis1 = plt.subplots()
axis1.set_title("Tumor Volume for Ceftamin")
axis1.set_ylabel("Tumor Volume (mm3)")
axis1.boxplot(ceftamin_data_object)
plt.show()
#Compiliation graph for the top 4 drug regimens.
drug_list= ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
tumor_list = final_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].apply(list)
tumor_df = tumor_list.reindex(drug_list)
plt.boxplot(tumor_df, labels=drug_list)
plt.title("Tumor Volume for Top 4 Drug Regimens")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_data_df = combined_data_files_df.loc[combined_data_files_df["Drug Regimen"]=="Capomulin"]
capomulin_data_df = capomulin_data_df.reset_index()
capomulin_data_df.head()
#Creating variables for tumor volume vs. time point.
tumor_volume_vs_time_point = capomulin_data_df[capomulin_data_df["Mouse ID"].isin(["s185"])]
tumor_volume_vs_time_point
#Creating a dataframe
tumor_volume_data_vs_time_point_data = tumor_volume_vs_time_point[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
tumor_volume_data_vs_time_point_data
#Resetting index on dataframe
line_plot_df = tumor_volume_data_vs_time_point_data.reset_index()
line_plot_df
#Recreating of the dataframe after reset
line_plot_updated = line_plot_df[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
line_plot_updated
#Line plot graph
lines = line_plot_updated.plot.line()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_data_df = combined_data_files_df.loc[combined_data_files_df["Drug Regimen"]=="Capomulin"]
capomulin_data_df = capomulin_data_df.reset_index()
capomulin_data_df.head()
#Creating dataframe
capomulin_scatter_df = capomulin_data_df[["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
capomulin_scatter_df
#Resetting index and finding the mean by weight and tumor volume.
capomulin_scatter_plot = capomulin_data_df.reset_index()
capomulin_group_weight = capomulin_scatter_plot.groupby("Weight (g)")["Tumor Volume (mm3)"].mean()
#Create dataframe for scatter plot and reset
final_capomulin_plot = pd.DataFrame(capomulin_group_weight).reset_index()
#Plotting graph
scatter_plot = final_capomulin_plot.plot(kind="scatter", x="Weight (g)", y="Tumor Volume (mm3)", grid = True, figsize=(8,8))
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
#Declaring "x" & "y" axis
x_axis = final_capomulin_plot["Weight (g)"]
y_axis = final_capomulin_plot["Tumor Volume (mm3)"]
#Linear regression
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_axis, y_axis)
#Regression calculation
regress_values = x_axis * slope + intercept
line_equation = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
#Plotting the scatter
plt.scatter(x_axis, y_axis)
#Plotting the line
plt.plot(x_axis,regress_values,"r-")
plt.annotate(line_equation, (6,10), fontsize=15, color="red")
#Labels for the graph
plt.xlabel("Weight")
plt.ylabel("Tumor Volume")
plt.title("Tumor Volume vs. Weight Correlation & Regression")
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights * Look across all previously generated figures and tables and write at least three observations or inferences that can be made from the data. Include these observations at the top of notebook. Three major observations: first, in the box plot the biggest outlier appears for Infubinol. Second, in the line plot for Capomulin the tumor volume decreased as time went on, which means that drug worked. Last, the scatter plot with the linear regression shows a positive correlation of 0.84 between tumor volume and the weight of the mouse.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
#you would use the merge function, and the similar column in both csv files is "Mouse ID"
#drop any dublicates with .drop_duplicates
merge_table = pd.merge(study_results, mouse_metadata, how = "left", on="Mouse ID")
# Display the data table for preview
# to display you would do "name_df.head()"
merge_table.head()
# Checking the number of mice.
total_mice = len(merge_table["Mouse ID"].value_counts())
unique_count=pd.DataFrame([total_mice],columns = ["Total Mice"])
unique_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicates = merge_table.loc[merge_table.duplicated(subset = ["Mouse ID","Timepoint"]), 'Mouse ID'].unique()
duplicates
duplicate_count=pd.DataFrame([duplicates],columns = ["Mouse ID"])
duplicate_count
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mice = merge_table.loc[merge_table["Mouse ID"] == "g989", :]
duplicate_mice
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = merge_table[merge_table['Mouse ID'].isin(duplicates)==False]
clean_df.head()
# Checking the number of mice in the clean DataFrame.
clean_count = clean_df["Mouse ID"].unique()
clean_length = len(clean_count)
clean_length
clean_count_df = pd.DataFrame([clean_length], columns = ["Number of Mice"])
clean_count_df
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
tumor_mean = merge_table.groupby(["Drug Regimen"]).mean()["Tumor Volume (mm3)"]
tumor_median = merge_table.groupby(["Drug Regimen"]).median()["Tumor Volume (mm3)"]
tumor_variance = merge_table.groupby(["Drug Regimen"]).var()["Tumor Volume (mm3)"]
tumor_sd = merge_table.groupby(["Drug Regimen"]).std()["Tumor Volume (mm3)"]
tumor_sem = merge_table.groupby(["Drug Regimen"]).sem()["Tumor Volume (mm3)"]
# This method is the most straightforward, creating multiple series and putting them all together at the end.
tumor_summary = pd.DataFrame({
"Tumor Mean":tumor_mean,
"Tumor Median":tumor_median,
"Tumor Variance":tumor_variance,
"Tumor Standard Deviation":tumor_sd,
"Tumor SEM":tumor_sem
})
tumor_summary
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
tumor_summary2 = merge_table.groupby("Drug Regimen").agg({"Tumor Volume (mm3)": ["mean","median","var","std","sem"]})
tumor_summary2
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
total_number_mice = merge_table['Drug Regimen'].value_counts()
total_number_mice.plot(kind = 'bar')
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.title("Total Number of Mice for each Treatment")
plt.legend()
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
plt.bar(total_number_mice.index.values, total_number_mice.values)
plt.xticks(rotation = 45)
plt.legend(["Drug Regimen"])
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
male_female = merge_table["Sex"].value_counts()
male_female.plot(kind = 'pie', autopct='%1.1f%%')
plt.title("Male vs Female")
plt.legend()
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(male_female.values, labels = male_female.index.values, autopct='%1.1f%%')
plt.title("Male vs Female")
plt.legend(["Male","Female"])
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
last_timepoint = merge_table.groupby(['Mouse ID'])['Timepoint'].max()
last_timepoint = last_timepoint.reset_index()
last_timepoint
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
data_merge = last_timepoint.merge(merge_table, on = ['Mouse ID', 'Timepoint'],how = "left")
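# merging on both 'Mouse ID' and 'Timepoint' attaches the full record (including tumor volume) at each mouse's last timepoint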
data_merge
# Put treatments into a list for for loop (and later for plot labels)
list_treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_data = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for i in list_treatments:
# Locate the rows which contain mice on each drug and get the tumor volumes
tumor_volume = data_merge.loc[data_merge["Drug Regimen"] == i, 'Tumor Volume (mm3)']
# add subset
tumor_vol_data.append(tumor_volume)
# Determine outliers using upper and lower bounds
quartiles_info = tumor_volume.quantile([.25,.5,.75])
lower_range = quartiles_info[0.25]
upper_range = quartiles_info[0.75]
middle_range = upper_range - lower_range
lower_bound = lower_range - (1.5*middle_range)
upper_bound = upper_range + (1.5*middle_range)
find_outlier= tumor_volume.loc[(tumor_volume < lower_bound) | (tumor_volume > upper_bound)]
print(f"{i} outliers {find_outlier}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
blue_out = dict(markerfacecolor = 'blue', markersize = 12)
fig1, ax1 = plt.subplots()
plt.boxplot(tumor_vol_data, labels = list_treatments, flierprops = blue_out)
ax1.set_ylabel("Tumor Volume")
ax1.set_xlabel("Drug Regimen")
plt.title("Tumor Volume of Each Mouse")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
cap_mice = merge_table.loc[merge_table["Drug Regimen"] == "Capomulin", :]
cap_mice = merge_table.loc[merge_table ["Mouse ID"]=="s185", :]
single_mice = cap_mice.loc[cap_mice["Mouse ID"] == "s185"]
single_mice = single_mice.loc[:, ["Timepoint", "Tumor Volume (mm3)"]]
single_mice = single_mice.reset_index(drop=True)
single_mice.set_index('Timepoint').plot(figsize=(10, 8), linewidth=2.5, color='red')
plt.title("Time Point vs. Tumor Volume")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
#scatter_df = cap_mice.loc[:, ["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
cap_mice = merge_table.loc[merge_table["Drug Regimen"] == "Capomulin", :]
scatter_df = cap_mice.loc[:, ["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
average_capomulin = pd.DataFrame(cap_mice.groupby(["Mouse ID", "Weight (g)"])["Tumor Volume (mm3)"].mean()).reset_index()
average_capomulin = average_capomulin.rename(columns={"Tumor Volume (mm3)": "Average Volume"})
average_capomulin = average_capomulin.set_index('Mouse ID')
average_capomulin.plot(kind="scatter", x="Weight (g)", y="Average Volume", grid=True, figsize=(4,4),
title="Weight Vs. Average Tumor Volume")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
mouse_weight = average_capomulin.iloc[:,0]
avg_tumor_volume = average_capomulin.iloc[:,1]
correlation = st.pearsonr(mouse_weight,avg_tumor_volume)
print(f"The correlation between both factors is {round(correlation[0],2)}")
x_values = average_capomulin['Weight (g)']
y_values = average_capomulin['Average Volume']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.title('Correlation Between Mouse Weight vs. Average Tumor Volume')
plt.xlabel('Mouse Weight')
plt.ylabel('Average Tumor Volume')
plt.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
means = combined_df.groupby(['Drug', 'Timepoint'], as_index=False)['Tumor Volume (mm3)'].mean()
# Convert to DataFrame
tumor_mean_df = pd.DataFrame(means)
# Preview DataFrame
tumor_mean_df.head(12)
# Below is the expected output which matches mine
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
std_error = combined_df.groupby(['Drug', 'Timepoint'])['Tumor Volume (mm3)'].sem()
# Convert to DataFrame
std_error_df = pd.DataFrame(std_error)
std_error_df = std_error_df.reset_index()
# Preview DataFrame
std_error_df.head()
# Below is the expected output which matches mine
# Minor Data Munging to Re-Format the Data Frames
# Exploration notes (run once to inspect the table, then commented out):
#   tumor_mean_df.columns
#   tumor_mean_df.isnull().sum().sum()
# If the null check had found missing values, the pivot below would need .fillna(0); the plain pivot is used since it did not.
re_formatted = pd.pivot_table(tumor_mean_df, values = 'Tumor Volume (mm3)', index='Timepoint', columns='Drug')
# Preview that Reformatting worked
re_formatted.head(20)
# Generate the Plot (with Error Bars)
drug_list = ['Capomulin','Infubinol','Ketapril','Placebo']
mean_df = pd.pivot_table(tumor_mean_df, values = 'Tumor Volume (mm3)', index='Timepoint', columns='Drug')
error_df = pd.pivot_table(std_error_df, values = 'Tumor Volume (mm3)', index='Timepoint', columns='Drug')
mean_df[drug_list].plot()
for k in drug_list:
plt.errorbar(
x=mean_df[k].index,
y=mean_df[k],
yerr=error_df[k],
fmt='',
label = k
)
# Add labels to X and Y axes :: Add title :: Set limits for X and Y axes
plt.title("Tumor Response to Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Tumor Volume (mm3)")
plt.xlim(-2,47)
# Add in a grid for the chart
plt.grid(axis='y')
# Save the Figure
# Show the Figure
plt.show()
# Below is the expected output. I was unable to match the legend colors to the lines in the graph!
# I was unable to create the legend symbols e.g. 'dots ... diamond'
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# note missing parameter as_index=False in code
# means = combined_df.groupby(['Drug', 'Timepoint'], as_index=False)['Metastatic Sites'].mean()
means = combined_df.groupby(['Drug', 'Timepoint'])['Metastatic Sites'].mean()
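# (with as_index=False the group keys Drug and Timepoint would stay as ordinary columns instead of forming a MultiIndex, which is what the reset_index() suggestion below accomplishes)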
# Convert to DataFrame
metastatic_mean_df = pd.DataFrame(means)
# for column headers, replace above with this code: metastatic_mean_df = metastatic_mean_df.reset_index()
# Preview DataFrame
metastatic_mean_df.head()
# Below is the expected output which matches mine
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
std_error = combined_df.groupby(['Drug', 'Timepoint'])['Metastatic Sites'].sem()
# Convert to DataFrame
metastatic_std_error_df = pd.DataFrame(std_error)
# for column headers, replace above with this code: std_error_df = std_error_df.reset_index()
# Preview DataFrame
metastatic_std_error_df.head()
# Below is the expected output which matches mine
# Minor Data Munging to Re-Format the Data Frames
re_formatted = pd.pivot_table(metastatic_mean_df, values = 'Metastatic Sites', index='Timepoint', columns='Drug')
# Preview that Reformatting worked
re_formatted.head()
# Below is the expected output with 'Tumor Volume (mm3)' as values
# versus my results above, which has 'Metastatic Sites' as values.
# Generate the Plot (with Error Bars)
drug_list = ['Capomulin','Infubinol','Ketapril','Placebo']
mean_df = pd.pivot_table(metastatic_mean_df, values = 'Metastatic Sites', index='Timepoint', columns='Drug')
error_df = pd.pivot_table(metastatic_std_error_df, values = 'Metastatic Sites', index='Timepoint', columns='Drug')
mean_df[drug_list].plot()
for k in drug_list:
plt.errorbar(
x=mean_df[k].index,
y=mean_df[k],
yerr=error_df[k],
fmt='',
label = k
)
# Add labels to X and Y axes :: Add title :: Set limits for X and Y axes
plt.title("Metastic Spread During Treatment")
plt.xlabel("Treatment Duration (Days)")
plt.ylabel("Metastic Sites")
plt.xlim(-2,47)
# Add in a grid for the chart
plt.grid(axis='y')
# Save the Figure
# Show the Figure
plt.show()
# Below is the expected output. I was unable to match the legend colors to the lines in the graph!
# I was unable to create the legend symbols e.g. 'dots ... diamond'
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
counts = combined_df.groupby(['Drug', 'Timepoint'], as_index=False)['Mouse ID'].count()
# Convert to DataFrame
mouse_count_df = pd.DataFrame(counts)
# Rename column. This code didn't work: mouse_count_df.rename(columns={'Mouse ID':'Mouse Count'})
mouse_count_df.rename(columns={'Mouse ID':'Mouse Count'}, inplace=True)
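# rename() returns a new DataFrame by default, so the earlier call without inplace=True (or without reassigning the result) left mouse_count_df unchanged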
# Preview DataFrame
mouse_count_df.head()
# Below is the expected output which matches mine
# Minor Data Munging to Re-Format the Data Frames
re_formatted = pd.pivot_table(mouse_count_df, values = 'Mouse Count', index='Timepoint', columns='Drug')
# Preview that Reformatting worked
re_formatted.head()
# Below is the expected output which matches mine
# Generate the Plot (Accounting for percentages)
drug_list = ['Capomulin','Infubinol','Ketapril','Placebo']
reformatted_mouse_count_df = pd.pivot_table(mouse_count_df, values = 'Mouse Count', index='Timepoint', columns='Drug')
reformatted_mouse_count_df = reformatted_mouse_count_df.apply(lambda x : x*100 / 25)
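# dividing by 25 assumes each regimen started with 25 mice, converting the surviving-mouse counts into percentages of the initial cohort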
#reformatted_mouse_count_df[drug_list].plot(marker='o')
reformatted_mouse_count_df[drug_list].plot(marker='o')
# Add labels to X and Y axes :: Add title :: Set limits for X and Y axes
plt.title("Survival During Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Survival Rate (%)")
plt.xlim(-2,47)
# Add in a grid for the chart
plt.grid(axis='both')
# Save the Figure
# Show the Figure
plt.show()
# I was unable to create the legend symbols e.g. 'dots ... diamond'
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
mean_df = pd.pivot_table(tumor_mean_df, values = 'Tumor Volume (mm3)', index='Timepoint', columns='Drug')
# Apply a lambda to each row to compute the percent change in mean tumor volume from the starting volume of 45 mm3
mean_df = mean_df.apply(lambda x: 100 * (x-45) / 45, axis=1)
# Display the last row
mean_df.tail(1)
# The values displayed above are similar to the ones in the expected output below, just in a different format
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
research_combined = pd.merge(study_results,mouse_metadata,how ="left", on = "Mouse ID")
# Display the data table for preview
research_combined.head(15)
# Checking the initial number of mice.
research_combined['Mouse ID'].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dup_mice = research_combined.loc[research_combined.duplicated(['Mouse ID', 'Timepoint'])]
dup_mice.head()
# Optional: Get all the data for the duplicate mouse ID.
research_combined.loc[research_combined['Mouse ID'] == 'g989']
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
mice_clean = research_combined.loc[research_combined['Mouse ID'] != 'g989']
# Check the new number of mice
mice_clean['Mouse ID'].nunique()
mice_clean
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
grouped_df = mice_clean.groupby(["Drug Regimen"])
MeanTumorVolume = grouped_df["Tumor Volume (mm3)"].mean()
MeanTumorVolume_df = pd.DataFrame(MeanTumorVolume)
MeanTumorVolume_df = MeanTumorVolume_df.rename(columns={"Tumor Volume (mm3)": "Mean Tumor Volume"})
MedianTumorVolume = grouped_df["Tumor Volume (mm3)"].median()
MedianTumorVolume_df = pd.DataFrame(MedianTumorVolume)
MedianTumorVolume_df = MedianTumorVolume_df.rename(columns={"Tumor Volume (mm3)": "Median Tumor Volume"})
TumorVolumeVariance = grouped_df["Tumor Volume (mm3)"].var()
TumorVolumeVariance_df = pd.DataFrame(TumorVolumeVariance)
TumorVolumeVariance_df = TumorVolumeVariance_df.rename(columns={"Tumor Volume (mm3)": "Tumor Volume Variance"})
TumorStandardDeviation = grouped_df["Tumor Volume (mm3)"].std()
TumorStandardDeviation_df = pd.DataFrame(TumorStandardDeviation)
TumorStandardDeviation_df = TumorStandardDeviation_df.rename(columns={"Tumor Volume (mm3)": "Tumor Standard Deviation"})
TumorStandardError = grouped_df["Tumor Volume (mm3)"].sem()
TumorStandardError_df = pd.DataFrame(TumorStandardError)
TumorStandardError_df = TumorStandardError_df.rename(columns={"Tumor Volume (mm3)": "Tumor Standard Error"})
merge1 = pd.merge(MeanTumorVolume_df, MedianTumorVolume_df, how='outer', on='Drug Regimen')
merge2 = pd.merge(merge1, TumorVolumeVariance_df, how='outer', on='Drug Regimen')
merge3 = pd.merge(merge2, TumorStandardDeviation_df, how='outer', on='Drug Regimen')
merge_final = pd.merge(merge3, TumorStandardError_df, how='outer', on='Drug Regimen')
merge_final
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
grouped_final = grouped_df['Tumor Volume (mm3)'].agg(['mean','median','var','std','sem'])
grouped_final
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
mice_per_drug = grouped_df["Tumor Volume (mm3)"].count()
mice_per_drug.plot(kind='bar')
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
plt.bar(mice_per_drug.index, mice_per_drug)
plt.xticks(rotation=45)
# Generate a pie plot showing the distribution of female versus male mice using pandas
sex_df = mice_clean.groupby(["Sex"])
mice_sex = sex_df["Sex"].count()
mice_sex.plot(kind='pie',autopct='%.2f%%')
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(mice_sex, autopct='%.2f%%',labels=mice_sex.index)
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
df = mice_clean.groupby('Mouse ID').max()
df.loc[df.index=='b879']
mice_clean.loc[mice_clean['Mouse ID']== 'f932']
# Start by getting the last (greatest) timepoint for each mouse
max_timepoint = mice_clean.groupby('Mouse ID').Timepoint.max()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
max_timepoint_df = mice_clean.merge(max_timepoint, on=['Mouse ID','Timepoint'])
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Create empty list to fill with tumor vol data (for plotting)
drugs = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
tumors = []
tumor_volume=[]
# Put treatments into a list for for loop (and later for plot labels)
for drug in drugs:
final_tumor = max_timepoint_df.loc[max_timepoint_df['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
tumors.append(final_tumor)
quartiles = final_tumor.quantile([.25, .50, .75])
quartiles = pd.DataFrame(quartiles)
tumors = pd.DataFrame(tumors)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Determine outliers using upper and lower bounds
lowerq = final_tumor.quantile([0.25]).mean()
upperq = final_tumor.quantile([0.75]).mean()
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
# Values outside the upper and lower bounds (1.5*IQR beyond the quartiles) are flagged as potential outliers
print(f"The lower quartile of tumors in mice is: {lowerq}")
print(f"The upper quartile of tumors in mice is: {upperq}")
print(f"The interquartile range of tumors in mice is: {iqr}")
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
tumors.boxplot()
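# A possible fix, sketched here rather than asserted as the intended solution: build one list of
# final tumor volumes per regimen (reusing drugs and max_timepoint_df from above) and pass those
# lists to boxplot so each drug gets its own box. tumor_vols_by_drug is a new name for this sketch.
tumor_vols_by_drug = [
    max_timepoint_df.loc[max_timepoint_df['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
    for drug in drugs
]
plt.boxplot(tumor_vols_by_drug, labels=drugs)
plt.ylabel('Final Tumor Volume (mm3)')
plt.show()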
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
mice_clean.loc[mice_clean['Mouse ID']== 'b128'].plot(x='Timepoint', y='Tumor Volume (mm3)')
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
mice_clean.loc[mice_clean['Drug Regimen']== 'Capomulin'].plot.scatter(x='Weight (g)', y='Tumor Volume (mm3)')
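# A minimal sketch (not the original code) of the scatter plot built from the average tumor volume
# per mouse instead of every individual measurement, reusing mice_clean from above; capomulin_avg
# is a new name introduced only for this sketch.
capomulin_avg = (mice_clean.loc[mice_clean['Drug Regimen'] == 'Capomulin']
                 .groupby('Mouse ID')[['Weight (g)', 'Tumor Volume (mm3)']]
                 .mean())
plt.scatter(capomulin_avg['Weight (g)'], capomulin_avg['Tumor Volume (mm3)'])
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()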
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
Capomulin_mice = mice_clean.loc[mice_clean['Drug Regimen']== 'Capomulin']
x_values = Capomulin_mice['Weight (g)']
y_values = Capomulin_mice['Tumor Volume (mm3)']
correlation = st.pearsonr(x_values,y_values)
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel('Mouse Weight in Grams')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
print(f"The correlation between both factors is {round(correlation[0],2)}")
# Final Analysis:
# I could not get the box plot to show a separate distribution for each of the four drugs studied;
# it displays a single combined total because each regimen needs its own list of final tumor volumes.
# My scatter plot and correlation also do not match the example, likely because they use every
# measurement rather than the per-mouse average tumor volume.
# It is clear that Capomulin is the best treatment option.
# For the most part, the heavier the mouse, the larger the tumor.
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
Tumor_group = combined_df.groupby(['Drug','Timepoint']).mean()['Tumor Volume (mm3)']
Tumor_group_df = pd.DataFrame(Tumor_group)
Tumor_group_df = Tumor_group_df.reset_index()
Tumor_group_df.head(10)
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
Tumor_Volumes_Error = combined_df.groupby(['Drug', 'Timepoint']).sem()['Tumor Volume (mm3)']
Tumor_Vol_Error_df = pd.DataFrame(Tumor_Volumes_Error)
Tumor_Vol_Error_df = Tumor_Vol_Error_df.reset_index()
Tumor_Vol_Error_df.head(10)
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
Tumor_group_df = Tumor_group_df.reset_index()
Tumor_pivot_group_df = Tumor_group_df.pivot(index='Timepoint', columns='Drug')['Tumor Volume (mm3)']
Tumor_Vol_Error_df = Tumor_Vol_Error_df.reset_index()
Tumor_pivot_Vol_Error = Tumor_Vol_Error_df.pivot(index='Timepoint', columns='Drug')['Tumor Volume (mm3)']
Tumor_pivot_group_df.head()
# Generate the Plot (with Error Bars)
plt.errorbar(Tumor_pivot_group_df.index, Tumor_pivot_group_df['Capomulin'], yerr=Tumor_pivot_Vol_Error['Capomulin'], color="r", marker="o", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Tumor_pivot_group_df.index, Tumor_pivot_group_df['Infubinol'], yerr=Tumor_pivot_Vol_Error['Infubinol'], color="b", marker="^", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Tumor_pivot_group_df.index, Tumor_pivot_group_df['Ketapril'], yerr=Tumor_pivot_Vol_Error['Ketapril'], color="g", marker="s", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Tumor_pivot_group_df.index, Tumor_pivot_group_df['Placebo'], yerr=Tumor_pivot_Vol_Error['Placebo'], color="k", marker="d", markersize=5, linestyle="dashed", linewidth=0.50)
plt.title("Tumor Response to Treatment")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Time (Days)")
plt.grid(axis='y')
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'],loc="best", fontsize="small", fancybox=True)
# Save the Figure
plt.savefig("analysis/Fig1.png")
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
Site_Data_mean = combined_df.groupby(['Drug','Timepoint']).mean()['Metastatic Sites']
Site_Data_mean = pd.DataFrame(Site_Data_mean)
Site_Data_mean.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
Site_Data_error = combined_df.groupby(['Drug','Timepoint']).sem()['Metastatic Sites']
Site_Data_error = pd.DataFrame(Site_Data_error)
Site_Data_error.head()
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
Site_Data_mean = Site_Data_mean.reset_index()
Site_Data_mean_Pivot = Site_Data_mean.pivot(index='Timepoint', columns='Drug')['Metastatic Sites']
Site_Data_error = Site_Data_error.reset_index()
Site_Data_error_Pivot = Site_Data_error.pivot(index='Timepoint', columns='Drug')['Metastatic Sites']
Site_Data_mean_Pivot.head()
# Generate the Plot (with Error Bars)
plt.errorbar(Site_Data_mean_Pivot.index, Site_Data_mean_Pivot['Capomulin'], yerr=Site_Data_error_Pivot['Capomulin'], color="r", marker="o", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Site_Data_mean_Pivot.index, Site_Data_mean_Pivot['Infubinol'], yerr=Site_Data_error_Pivot['Infubinol'], color="b", marker="^", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Site_Data_mean_Pivot.index, Site_Data_mean_Pivot['Ketapril'], yerr=Site_Data_error_Pivot['Ketapril'], color="g", marker="s", markersize=5, linestyle="dashed", linewidth=0.50)
plt.errorbar(Site_Data_mean_Pivot.index, Site_Data_mean_Pivot['Placebo'], yerr=Site_Data_error_Pivot['Placebo'], color="k", marker="d", markersize=5, linestyle="dashed", linewidth=0.50)
plt.title('Metastatic Spread During Treatment')
plt.ylabel('Met. Sites')
plt.xlabel('Treatment Duration (Days)')
plt.grid(axis='y')
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'], loc="best", fontsize="small", fancybox=True)
# Save the Figure
plt.savefig("analysis/Fig2.png")
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
Mouse_Count = combined_df.groupby(["Drug", "Timepoint"]).count()["Tumor Volume (mm3)"]
Mouse_Count = pd.DataFrame({'Mouse Count': Mouse_Count})
Mouse_Count.head().reset_index()
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
Mouse_Count = Mouse_Count.reset_index()
Mouse_Count_pivot = Mouse_Count.pivot(index='Timepoint', columns='Drug')['Mouse Count']
Mouse_Count_pivot.head()
# Generate the Plot (Accounting for percentages)
plt.plot(100 * Mouse_Count_pivot['Capomulin'] / 25 , "ro", linestyle="dashed", markersize=5, linewidth=0.50)
plt.plot(100 * Mouse_Count_pivot['Infubinol'] / 25, "b^", linestyle="dashed", markersize=5, linewidth=0.50)
plt.plot(100 * Mouse_Count_pivot["Ketapril"] / 25, "gs", linestyle="dashed", markersize=5, linewidth=0.50)
plt.plot(100 * Mouse_Count_pivot["Placebo"] / 25 , "kd", linestyle="dashed", markersize=6, linewidth=0.50)
plt.title("Survival During Treatment")
plt.ylabel("Survival Rate (%)")
plt.xlabel("Time (Days)")
plt.grid(True)
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'], loc="best", fontsize="small", fancybox=True)
# Save the Figure
plt.savefig("analysis/Fig3.png")
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
Summary_bar_mean = 100 * (Tumor_pivot_group_df.iloc[-1] - Tumor_pivot_group_df.iloc[0]) / Tumor_pivot_group_df.iloc[0]
Summary_bar_mean_error = 100 * (Tumor_pivot_Vol_Error.iloc[-1] - Tumor_pivot_Vol_Error.iloc[0]) /Tumor_pivot_Vol_Error.iloc[0]
Summary_bar_mean
# Store all Relevant Percent Changes into a Tuple
Summary_changes = (Summary_bar_mean["Capomulin"],
Summary_bar_mean["Infubinol"],
Summary_bar_mean["Ketapril"],
Summary_bar_mean["Placebo"])
# Splice the data between passing and failing drugs
fig, ax = plt.subplots()
ind = np.arange(len(Summary_changes))
width = 1
rectsPass = ax.bar(ind[0], Summary_changes[0], width, color='green')
rectsFail = ax.bar(ind[1:], Summary_changes[1:], width, color='red')
# Orient widths. Add labels, tick marks, etc.
ax.set_ylabel('% Tumor Volume Change')
ax.set_title('Tumor Change Over 45 Day Treatment')
ax.set_xticks(ind + 0.5)
ax.set_xticklabels(('Capomulin', 'Infubinol', 'Ketapril', 'Placebo'))
ax.set_autoscaley_on(False)
ax.set_ylim([-30,70])
ax.grid(True)
# Use functions to label the percentages of changes
def autolabelFail(rects):
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 3,
'%d%%' % int(height),
ha='center', va='bottom', color="white")
def autolabelPass(rects):
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., -8,
'%d%% ' % int(height),
ha='center', va='bottom', color="white")
# Call functions to implement the function calls
autolabelPass(rectsPass)
autolabelFail(rectsFail)
# Save the Figure
fig.savefig("analysis/Fig4.png")
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merge_df = pd.merge(study_results, mouse_metadata, how="left", on='Mouse ID')
# Display the data table for preview
merge_df.head()
# # Checking the number of mice.
# mice_data = merge_df["Mouse ID"].value_counts()
# mice_data_df = pd.DataFrame(mice_data)
# mice_data_df.head()
len(merge_df["Mouse ID"].unique())
num_mice = len(merge_df["Mouse ID"].unique())
print(f"Number of Mice: {num_mice}")
# mice_df = len(mice_data)
# mice_df
# print(f"Number of Mice: {mice_df}")
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = merge_df.loc[merge_df.duplicated(subset=["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
duplicate_mice
# Note: .duplicated() is the right method for flagging these repeated Mouse ID / Timepoint rows
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mice_id = merge_df.loc[merge_df["Mouse ID"]=="g989"]
duplicate_mice_id
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# print all rows where mouse ID column is NOT in duplicate_mice
clean_df = merge_df[merge_df["Mouse ID"].isin(duplicate_mice)==False]
clean_df.head()
# Checking the number of mice in the clean DataFrame.
len(clean_df["Mouse ID"].unique())
num_mice_clean_data = len(clean_df["Mouse ID"].unique())
print(f"Number of Mice from a Cleaned DataFrame: {num_mice_clean_data}")
###Output
Number of Mice from a Cleaned DataFrame: 248
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
average = clean_df.groupby("Drug Regimen").mean()["Tumor Volume (mm3)"]
middle = clean_df.groupby("Drug Regimen").median()["Tumor Volume (mm3)"]
variance = clean_df.groupby("Drug Regimen").var()["Tumor Volume (mm3)"]
stan_dev = clean_df.groupby("Drug Regimen").std()["Tumor Volume (mm3)"]
semi = clean_df.groupby("Drug Regimen").sem()["Tumor Volume (mm3)"]
stat_table = pd.DataFrame({
"Tumor Volume Mean": average,
"Tumor Volume Median": middle,
"Tumor Volume Variance": variance,
"Tumor Volume Standard Deviation": stan_dev,
"Tumor Volume Semi": semi
})
stat_table
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
#df.agg(['sum', 'min'])
stat_table_new = clean_df.groupby("Drug Regimen").agg({"Tumor Volume (mm3)":["mean", "median", "var", "std", "sem"]})
stat_table_new
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
total_num = clean_df["Drug Regimen"].value_counts()
total_num.plot(kind="bar")
plt.title("Total Number of Measurements Taken on Each Drug")
plt.ylabel("Number of Unique Mice Tested")
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
plt.bar(total_num.index.values, total_num.values)
plt.title("Total Number of Measurements Taken on Each Drug")
plt.xticks(rotation=90)
plt.ylabel("Number of Unique Mice Tested")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_range = clean_df["Sex"].value_counts()
#gender_range
#total_num = clean_df["Drug Regimen"].value_counts()
gender_range.plot(kind="pie", autopct="%1.1f%%")
plt.title("Male vs Female")
plt.ylabel("Sex")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(gender_range.values, labels=gender_range.index.values, autopct="%1.1f%%")
plt.title("Male vs Female")
#plt.xticks(rotation=90)
plt.ylabel("Sex")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
### (which df? merge_df, clean_df, mouse_metadata, study_results)
timepoint_df = clean_df.sort_values(by='Timepoint', ascending=False)
timepoint_df
## drop duplicates
timepoint_df = clean_df.drop_duplicates(subset=["Mouse ID"])
timepoint_df
max_tumor_df = clean_df.groupby(["Mouse ID"])['Timepoint'].max()
max_tumor_df = max_tumor_df.reset_index()
max_tumor_df
new_merged_data = max_tumor_df.merge(clean_df,on=['Mouse ID','Timepoint'],how="left")
new_merged_data
four_drugs_list = ["Capomulin", "Ramicane", "Infubinol","Ceftamin"]
# timepoint_df = timepoint_df.loc[(timepoint_df["Drug Regimen"].isin(four_drugs_list))]
# timepoint_df
timepoint_df = clean_df.groupby ("Mouse ID").max()["Timepoint"]
timepoint_df
# # ROADMAP
# # Create a clean DataFrame by dropping the duplicate mouse by its ID.
# # print all rows where mouse ID column is NOT in duplicate_mice
# clean_df = merge_df[merge_df["Mouse ID"].isin(duplicate_mice)==False]
# clean_df.head()
#final_tumor_volume = stat_table_new.groupby(["Drug Regimen"]).agg({"Tumor Volume (mm3)":['mean','median','var','std','sem']})
# stat_table_new = clean_df.groupby("Drug Regimen").agg({"Tumor Volume (mm3)":["mean", "median", "var", "std", "sem"]})
# stat_table_new
# final_tumor_volume = stat_table("Timepoint").agg({"Drug Regimen": ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]})
# final_tumor_volume
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# treatments
four_drugs_list = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# placeholder for the final tumor volumes of each regimen
tumor_volume_list = []
for drug in four_drugs_list:
    # Locate the rows for mice on this drug and grab the tumor volume at their last timepoint
    final_tumor_volume = new_merged_data.loc[new_merged_data["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_volume_list.append(final_tumor_volume)
    # Calculate the quartiles and IQR and quantitatively determine if there are any potential outliers
    quartiles = final_tumor_volume.quantile([.25, .5, .75])
    lowerq = quartiles[0.25]
    upperq = quartiles[0.75]
    iqr = upperq - lowerq
    lower_bound = lowerq - (1.5 * iqr)
    upper_bound = upperq + (1.5 * iqr)
    print(f"{drug}: IQR = {round(iqr, 2)}, values below {round(lower_bound, 2)} or above {round(upper_bound, 2)} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumor_volume_list, labels=four_drugs_list)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
df_tumor_mean = df_data.groupby(["Drug","Timepoint"]).mean()
# Convert to DataFrame
df_tumor_mean = df_tumor_mean.drop("Metastatic Sites", axis = 1)
# Preview DataFrame
df_tumor_mean
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
df_tumor_sem = df_data.groupby(["Drug","Timepoint"]).sem()
# Convert to DataFrame
df_tumor_sem = df_tumor_sem.drop("Metastatic Sites", axis = 1)
df_tumor_sem = df_tumor_sem.drop("Mouse ID", axis = 1)
# Preview DataFrame
df_tumor_sem
# Minor Data Munging to Re-Format the Data Frames
df_tumor_by_drug_mean = df_tumor_mean.unstack(0)
df_tumor_by_drug_sem = df_tumor_sem.unstack(0)
# Preview that Reformatting worked
df_tumor_by_drug_mean
df_tumor_by_drug_sem
# Generate the Plot (with Error Bars)
x_axis = list(df_tumor_by_drug_mean.index)
drug_plot, ax, = plt.subplots()
#Capomulin
plt.plot(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Capomulin"], marker = 'd', color = 'red', label = "Capomulin")
ax.errorbar(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Capomulin"],df_tumor_by_drug_sem['Tumor Volume (mm3)']["Capomulin"], color = 'red')
#Infubinol
plt.plot(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Infubinol"], marker = 'o', color = 'blue', label = "Infubinol")
ax.errorbar(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Infubinol"],df_tumor_by_drug_sem['Tumor Volume (mm3)']["Infubinol"], color = 'blue')
#Ketapril
plt.plot(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Ketapril"], marker = '^', color = 'green', label = "Ketapril")
ax.errorbar(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Ketapril"],df_tumor_by_drug_sem['Tumor Volume (mm3)']["Ketapril"], color = 'green')
#Placebo
plt.plot(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Placebo"], marker = 's', color = 'black', label = "Placebo")
ax.errorbar(x_axis, df_tumor_by_drug_mean['Tumor Volume (mm3)']["Placebo"],df_tumor_by_drug_sem['Tumor Volume (mm3)']["Placebo"], color = 'black')
plt.legend()
plt.xlabel("Time (Days)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Tumor Response to Treatment")
ax.grid(axis = 'y')
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Metastatic Data Grouped by Drug and Timepoint
df_met_mean = df_data.groupby(["Drug","Timepoint"]).mean()
# Convert to DataFrame
df_met_mean = df_met_mean.drop("Tumor Volume (mm3)", axis = 1)
# Preview DataFrame
df_met_mean.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
df_met_sem = df_data.groupby(["Drug","Timepoint"]).sem()
# Convert to DataFrame
df_met_sem = df_met_sem.drop("Tumor Volume (mm3)", axis = 1)
df_met_sem = df_met_sem.drop("Mouse ID", axis = 1)
# Preview DataFrame
df_met_sem.head()
# Minor Data Munging to Re-Format the Data Frames
df_met_by_drug_mean = df_met_mean.unstack(0)
df_met_by_drug_sem = df_met_sem.unstack(0)
# Preview that Reformatting worked
df_met_by_drug_mean
df_met_by_drug_sem
# Generate the Plot (with Error Bars)
x_axis = list(df_met_by_drug_mean.index)
drug_plot, ax, = plt.subplots()
#Capomulin
plt.plot(x_axis, df_met_by_drug_mean['Metastatic Sites']["Capomulin"], marker = 'd', color = 'red', label = "Capomulin")
ax.errorbar(x_axis, df_met_by_drug_mean['Metastatic Sites']["Capomulin"],df_met_by_drug_sem['Metastatic Sites']["Capomulin"], color = 'red')
#Infubinol
plt.plot(x_axis, df_met_by_drug_mean['Metastatic Sites']["Infubinol"], marker = 'o', color = 'blue', label = "Infubinol")
ax.errorbar(x_axis, df_met_by_drug_mean['Metastatic Sites']["Infubinol"],df_met_by_drug_sem['Metastatic Sites']["Infubinol"], color = 'blue')
#Ketapril
plt.plot(x_axis, df_met_by_drug_mean['Metastatic Sites']["Ketapril"], marker = '^', color = 'green', label = "Ketapril")
ax.errorbar(x_axis, df_met_by_drug_mean['Metastatic Sites']["Ketapril"],df_met_by_drug_sem['Metastatic Sites']["Ketapril"], color = 'green')
#Placebo
plt.plot(x_axis, df_met_by_drug_mean['Metastatic Sites']["Placebo"], marker = 's', color = 'black', label = "Placebo")
ax.errorbar(x_axis, df_met_by_drug_mean['Metastatic Sites']["Placebo"],df_met_by_drug_sem['Metastatic Sites']["Placebo"], color = 'black')
plt.legend()
plt.xlabel("Time (Days)")
plt.ylabel("Metastatic Sites")
plt.title("Metastatic Spread During Treatment")
ax.grid(axis = 'y')
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
df_mice_count = df_data.groupby(["Drug","Timepoint"]).count()
# Convert to DataFrame
df_mice_count = df_mice_count.drop("Metastatic Sites", axis = 1)
df_mice_count = df_mice_count.drop("Tumor Volume (mm3)", axis = 1)
# Preview DataFrame
df_mice_count
# Minor Data Munging to Re-Format the Data Frames
df_mice_count_by_drug = df_mice_count.unstack(0)
df_mice_0 = list(df_mice_count_by_drug.iloc[0])
df_mice_pc = 100*(df_mice_count_by_drug.div(df_mice_0))
# Preview the Data Frame
df_mice_pc
# Generate the Plot (with Error Bars)
x_axis = list(df_mice_pc.index)
drug_plot, ax, = plt.subplots()
#Capomulin
plt.plot(x_axis, df_mice_pc['Mouse ID']["Capomulin"], marker = 'd', color = 'red', label = "Capomulin")
#Infubinol
plt.plot(x_axis, df_mice_pc['Mouse ID']["Infubinol"], marker = 'o', color = 'blue', label = "Infubinol")
#Ketapril
plt.plot(x_axis, df_mice_pc['Mouse ID']["Ketapril"], marker = '^', color = 'green', label = "Ketapril")
#Placebo
plt.plot(x_axis, df_mice_pc['Mouse ID']["Placebo"], marker = 's', color = 'black', label = "Placebo")
plt.legend()
plt.xlabel("Time (Days)")
plt.ylabel("Survival Rate (%)")
plt.title("Survival During Treatment")
ax.grid(axis = 'y')
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
df_tumor_0 = list(df_tumor_by_drug_mean.iloc[0])
df_tumor_45 = list(df_tumor_by_drug_mean.iloc[-1])
print(df_tumor_0)
print(df_tumor_45)
perc_change = [100*(y/x-1) for x,y in zip(df_tumor_0, df_tumor_45)]
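# For example (illustrative numbers only): a regimen whose mean tumor volume moved from 45.0 at
# timepoint 0 to 36.2 at timepoint 45 would give 100 * (36.2 / 45.0 - 1), i.e. about -19.6%.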
# Display the data to confirm
drug_list = list(df_tumor_by_drug_mean['Tumor Volume (mm3)'])
df_tumor_drug_change = pd.DataFrame({"Drug": drug_list,"% Change":perc_change})
df_tumor_drug_change
#Store all Relevant Percent Changes into a Tuple
def convert(perc_change):
    return tuple(perc_change)
perc_change_tuple = convert(perc_change)
perc_change_tuple
# Splice the data between passing and failing drugs
# fail_perc = []
# pass_perc = []
# for k in perc_change:
# if k>0:
# fail_perc.append(k)
# else:
# pass_perc.append(k)
df_tumor_drug_change['Positive'] = df_tumor_drug_change['% Change'] > 0
# Orient widths. Add labels, tick marks, etc.
plt.figure(figsize = (20,4))
plt.grid()
plt.bar(drug_list, perc_change, width=1, align="edge", color=df_tumor_drug_change.Positive.map({True: 'g', False: 'r'}))
# Use functions to label the percentages of changes
df_tumor_drug_change.dtypes
df_tumor_drug_change["% Change"] = df_tumor_drug_change["% Change"].map("{:.2f}%".format)
for i, j, n in zip(df_tumor_drug_change["% Change"], df_tumor_drug_change["Drug"], df_tumor_drug_change["Positive"]):
    # place each label above the axis for drugs whose tumors grew, below it for drugs whose tumors shrank
    plt.text(s=i, x=j, y=5 if n else -5, color="w", verticalalignment="center", size=18)
# Call functions to implement the function calls
plt.ylabel("% Tumor Volume Change", size = 12)
plt.title("Tumor Change Over 45 Day Treatment", size = 20)
# Save the Figure
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
cmp = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID")
# Display the data table for preview
cmp.head()
# Checking the number of mice.
cmp['Mouse ID'].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# set the index to the mouse ID
# check the mouse data for ID
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# series variable to hold Tumor Volume Data grouped by Drug Regimen
# variable to hold the Mean Tumor Volume Data Grouped by Drug Regimen
# variable to hold median Tumor Volume Data Grouped by Drug Regimen
# variable to hold the Tumor Volume Variance Data Grouped by Drug Regimen
# variable to hold the Tumor Volume Standard Deviation Data Grouped by Drug Regimen
# variable to hold the Tumor Volume SEM Data Grouped by Drug Regimen
# Convert to DataFrame
# Preview DataFrame
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# list of unique drug regimens
# drug regimen as x-axis values for plotting
# drop all duplicate mice
# get mice counts per drug
# plot the mouse counts for each drug using pandas
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# plot the bar graph of mice count per drug regimen
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
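# A minimal sketch of the steps outlined in the comments above (this cell was left unfinished, so
# this is an assumption rather than the original solution), reusing the cmp DataFrame defined
# earlier; unique_mice, mice_per_regimen and gender_counts are new names introduced for this sketch.
unique_mice = cmp.drop_duplicates(subset='Mouse ID')            # drop all duplicate mice
mice_per_regimen = unique_mice['Drug Regimen'].value_counts()   # mice counts per drug
mice_per_regimen.plot(kind='bar')                               # bar plot using pandas
plt.ylabel('Number of Mice')
plt.show()
plt.bar(mice_per_regimen.index, mice_per_regimen.values)        # bar plot using pyplot
plt.xticks(rotation=90)
plt.ylabel('Number of Mice')
plt.show()
gender_counts = unique_mice['Sex'].value_counts()               # female vs male distribution
gender_counts.plot(kind='pie', autopct='%1.1f%%')               # pie plot using pandas
plt.show()
plt.pie(gender_counts, labels=gender_counts.index, autopct='%1.1f%%')  # pie plot using pyplot
plt.show()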
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# set drug regimen as index and drop associated regimens while only keeping Capomulin, Ramicane, Infubinol, and Ceftamin
# isolated view of just capomulin for later use
# Reset index so drug regimen column persists after inner merge
# get mouse count per drug
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# show all rows of data
# Put treatments into a list for for loop (and later for plot labels)
#set drugs to be analyzed, colors for the plots, and markers
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# Determine outliers using upper and lower bounds
# add subset
# tumor volumes for each Drug Regimen
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
#change index to mouse ID
#remove other mouse IDs so only s185 shows
#set the x-axis equal to the Timepoint and y-axis to Tumor Volume
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
# group by mouse ID to find average tumor volume
# establish x-axis value for the weight of the mice
# produce scatter plot of the data
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
#establish x and y values and find St. Pearson Correlation Coefficient for Mouse Weight and Tumor Volume Avg
#print St. Pearson Correlation Coefficient
# establish linear regression values
# linear regression line
# scatter plot of the data
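# A minimal sketch (an assumption, since only the comments were kept in this cell) of how the
# correlation reported below could be computed, using the cmp DataFrame defined earlier in this
# notebook; cap and cap_avg are new names introduced for this sketch.
cap = cmp.loc[cmp['Drug Regimen'] == 'Capomulin']
cap_avg = cap.groupby('Mouse ID')[['Weight (g)', 'Tumor Volume (mm3)']].mean()
corr, _ = st.pearsonr(cap_avg['Weight (g)'], cap_avg['Tumor Volume (mm3)'])
print(f"The correlation between both factors is {round(corr, 2)}")
slope, intercept, rvalue, pvalue, stderr = linregress(cap_avg['Weight (g)'], cap_avg['Tumor Volume (mm3)'])
plt.scatter(cap_avg['Weight (g)'], cap_avg['Tumor Volume (mm3)'])
plt.plot(cap_avg['Weight (g)'], slope * cap_avg['Weight (g)'] + intercept, 'r-')
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()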
###Output
The correlation between both factors is 0.84
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_df = pd.read_csv(mouse_metadata_path)
study_df = pd.read_csv(study_results_path)
# Combine the data into a single dataset
mergedMice = pd.merge(mouse_df, study_df, on ="Mouse ID", how = "inner")
# Display the data table for preview
mergedMice
# Checking the number of mice.
mergedMice["Mouse ID"].count()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
cleanMice = mergedMice.drop_duplicates(subset=["Mouse ID","Timepoint"])
cleanMice.head(22)
# Checking the number of mice in the clean DataFrame.
cleanMice["Mouse ID"].count()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
groupedMice = cleanMice.groupby(["Drug Regimen"])
meanMice = groupedMice["Tumor Volume (mm3)"].mean()
medianMice =groupedMice["Tumor Volume (mm3)"].median()
varianceMice =groupedMice["Tumor Volume (mm3)"].var()
stdMice =groupedMice["Tumor Volume (mm3)"].std()
semMice =groupedMice["Tumor Volume (mm3)"].sem()
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
groupedMice = cleanMice.groupby(["Drug Regimen"])
aggMice = groupedMice["Tumor Volume (mm3)"].agg(['mean','median','var','std','sem'])
aggMice
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of
#unique mice tested on each drug regimen using pandas.
totalMice = groupedMice["Mouse ID"].count()
totalMiceBar = totalMice.plot.bar(x="Drug", y = "Number of Mice", rot = 0,figsize=(10,10))
totalMiceBar
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
genderMice = cleanMice.groupby(["Sex"])
genderMice = genderMice["Mouse ID"].count()
genderMice = genderMice.rename("Sex")
totalMicePie = genderMice.plot.pie(y = "Sex", figsize=(5,5))
totalMicePie
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
lastMiceTime = cleanMice.loc[cleanMice["Timepoint"]== 45,:]
capDrugs = lastMiceTime.loc[lastMiceTime["Drug Regimen"]=="Capomulin", :]
ramDrugs =lastMiceTime.loc[lastMiceTime["Drug Regimen"]=="Ramicane", :]
infDrugs = lastMiceTime.loc[lastMiceTime["Drug Regimen"]=="Infubinol", :]
cefDrugs = lastMiceTime.loc[lastMiceTime["Drug Regimen"]=="Ceftamin", :]
fourDrugsMice = pd.concat([capDrugs, ramDrugs, infDrugs, cefDrugs])
fourDrugsMice
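# Note: filtering on Timepoint == 45 above keeps only mice that survived to the final timepoint;
# mice whose last recorded timepoint was earlier are excluded from the box plot below.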
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
#,"Ramicane","Infubinol" "Ceftamin"
# Put treatments into a list for for loop (and later for plot labels)
capDrugs = capDrugs["Tumor Volume (mm3)"]
capDrugs = capDrugs.rename("Capomulin")
ramDrugs = ramDrugs["Tumor Volume (mm3)"]
ramDrugs = ramDrugs.rename("Ramicane")
infDrugs = infDrugs["Tumor Volume (mm3)"]
infDrugs = infDrugs.rename("Infubinol")
cefDrugs = cefDrugs["Tumor Volume (mm3)"]
cefDrugs = cefDrugs.rename("Ceftamin")
cefDrugs
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
fig1, ax1 = plt.subplots()
ax1.set_title("Tumor Size per Regime")
ax1.set_ylabel("final tumor size")
ax1.boxplot([capDrugs,ramDrugs,infDrugs,cefDrugs])
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights

After parsing through the data we can conclude that:

1. Out of the four drug regimens analyzed (Capomulin, Ramicane, Infubinol, Ceftamin), the whisker plots show that Ramicane has the best results when measuring final tumor volume per drug regimen, with Capomulin a close second.
2. When analyzing the Capomulin regimen data for mouse weight vs. average tumor volume, there appears to be a linear relationship between the two parameters, with an R-squared value of 0.709.
3. Based on the analysis, it could be recommended to pursue further trials with Capomulin and Ramicane.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import os
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merged_df = pd.merge(mouse_metadata , study_results, on="Mouse ID", how="outer")
# Display the data table for preview
merged_df.head()
# Checking the number of mice.
mouse_count = len(merged_df['Mouse ID'].unique())
print(f'The current mouse count is: {mouse_count}')
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
merged_df.loc[merged_df.duplicated(subset=["Mouse ID", "Timepoint"]) == True, "Mouse ID"].unique()
# Optional: Get all the data for the duplicate mouse ID.
merged_dup_df = merged_df[merged_df['Mouse ID'] == 'g989']
merged_dup_df.head()
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
merged_newindex_df = merged_df.set_index('Mouse ID')
merged_drop_df = (merged_newindex_df.drop('g989'))
merged_drop_df = merged_drop_df.reset_index()
merged_drop_df.head()
# Checking the number of mice in the clean DataFrame.
merged_drop_count = len(merged_drop_df['Mouse ID'].unique())
print(f'The new mice count without duplicate mice data is: {merged_drop_count}')
###Output
The new mice count without duplicate mice data is: 248
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
merged_dropped_groupby_df = merged_drop_df.groupby('Drug Regimen')
tumor_mean = merged_dropped_groupby_df['Tumor Volume (mm3)'].mean()
tumor_median = merged_dropped_groupby_df['Tumor Volume (mm3)'].median()
tumor_var = merged_dropped_groupby_df['Tumor Volume (mm3)'].var()
tumor_std = merged_dropped_groupby_df['Tumor Volume (mm3)'].std()
tumor_sem = merged_dropped_groupby_df['Tumor Volume (mm3)'].sem()
# This method is the most straighforward, creating multiple series and putting them all together at the end.
sum_stats_df = pd.DataFrame({'Tumor Vol. Average (mm3)': round(tumor_mean,2),
'Tumor Vol. Median (mm3)': round(tumor_median,2),
'Tumor Vol. Variance': round(tumor_var,2),
'Tumor Vol. Standard Deviation': round(tumor_std,2),
'Vol. Mean Standard Error': round(tumor_sem,3)})
sum_stats_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
drug_reg_mouseid = merged_drop_df[['Drug Regimen','Mouse ID']]
drug_reg_mouseid.head()
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
group_drugreg = merged_drop_df.groupby('Drug Regimen')
grouped_drug_count = group_drugreg['Mouse ID'].nunique()
# Create a DataFrame
bar_df = pd.DataFrame(grouped_drug_count)
# Plot with Panda
bar_df.plot(kind='bar', figsize=(10,5))
# Label graph
plt.title("Mice Count per Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Mice Count")
plt.xticks(rotation=45)
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
y_axis = grouped_drug_count
x_axis = grouped_drug_count.index
plt.bar(x=x_axis,height=y_axis, color='r', alpha=0.5, align="center")
plt.title("Mice Count per Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Mice Count")
plt.xticks(rotation=45)
# Generate a pie plot showing the distribution of female versus male mice using pandas
idgroup_df = merged_drop_df.groupby('Sex')
gender_series = idgroup_df['Mouse ID'].nunique()
# Create dataframe
pie_df = pd.DataFrame(gender_series)
# Plot
pie_df.plot(kind='pie', subplots=True, autopct='%1.1f%%', title='Males vs Females')
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(pie_df['Mouse ID'], labels = ['Female', 'Male'], autopct='%1.1f%%')
plt.title('Males vs Females')
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
merged_drop_df.head()
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Capomulin, Ramicane, Infubinol, and Ceftamin
mouseID_group = merged_drop_df.groupby('Mouse ID')
max_df = pd.DataFrame(mouseID_group['Timepoint'].max())
maxvol_df = pd.merge(merged_drop_df, max_df, on=['Mouse ID','Timepoint'], how='right')
maxvol_major_four_df = maxvol_df.loc[(maxvol_df['Drug Regimen'] == 'Capomulin') | (maxvol_df['Drug Regimen'] == 'Ramicane') | (maxvol_df['Drug Regimen'] == 'Infubinol') | (maxvol_df['Drug Regimen'] == 'Ceftamin')]
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_data = []
iqr_list = []
lower_list = []
upper_list = []
outliers_list = []
for i in treatments:
# Calculate the IQR and quantitatively determine if there are any potential outliers.
max_vol_data_df = maxvol_major_four_df.loc[maxvol_major_four_df['Drug Regimen'] == i]
max_vol_data_df = max_vol_data_df['Tumor Volume (mm3)']
quantiles = max_vol_data_df.quantile([0.25,.5,.75])
# Determine quartiles
Q1 = quantiles[0.25]
Q3 = quantiles[0.75]
iqr = round((Q3 - Q1), 2)
# Determine outliers using upper and lower bounds
lower_bound = round(Q1 - (1.5*iqr), 2)
upper_bound = round(Q3 + (1.5*iqr), 2)
outliers = max_vol_data_df.loc[(max_vol_data_df < lower_bound) | (max_vol_data_df > upper_bound)]
# append
iqr_list.append(iqr)
lower_list.append(lower_bound)
upper_list.append(upper_bound)
outliers_list.append(outliers)
tumor_vol_data.append(max_vol_data_df)
# Create a DataFrame to display IQR, Lower and Upper Bounds, and Outliers
results_df = pd.DataFrame({'Treatments': treatments,
'IQR': iqr_list,
'Lower Bound': lower_list,
'Upper Bound': upper_list,
'Outliers': outliers_list}).set_index('Treatments')
print(results_df)
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.figure(figsize=(10,5))
plt.title('Final Tumor Size per Treatment')
plt.ylabel('Tumor Vol (mm3)')
plt.boxplot(tumor_vol_data, labels=treatments)
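# Optional (an assumption about desired styling, not part of the original): potential outliers can
# be emphasized by passing flierprops, e.g.
# plt.boxplot(tumor_vol_data, labels=treatments, flierprops=dict(markerfacecolor='red', markersize=10))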
###Output
IQR Lower Bound Upper Bound \
Treatments
Capomulin 7.78 20.71 51.83
Ramicane 9.10 17.91 54.31
Infubinol 11.48 36.83 82.75
Ceftamin 15.58 25.35 87.67
Outliers
Treatments
Capomulin Series([], Name: Tumor Volume (mm3), dtype: fl...
Ramicane Series([], Name: Tumor Volume (mm3), dtype: fl...
Infubinol 74 36.321346
Name: Tumor Volume (mm3), dtyp...
Ceftamin Series([], Name: Tumor Volume (mm3), dtype: fl...
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
s185_df = merged_drop_df.loc[merged_drop_df['Mouse ID'] == 's185']
s185_df = s185_df[['Timepoint', 'Tumor Volume (mm3)']].set_index('Timepoint')
s185_df.plot(color = 'red')
plt.title('Capomulin Effects on Mouse ID: s185')
plt.ylabel('Tumor Vol (mm3)')
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_df = merged_drop_df.loc[merged_drop_df['Drug Regimen'] == 'Capomulin']
capomulin_df.head(20)
capomulin_group_df = capomulin_df.groupby('Mouse ID')
capomulin_group_df = capomulin_group_df[['Weight (g)','Tumor Volume (mm3)']].mean()
plt.scatter(capomulin_group_df['Tumor Volume (mm3)'], capomulin_group_df['Weight (g)'])
plt.title('Weight (g) Vs. Avg. Tumor Volume (mm3)')
plt.xlabel('Tumor Volume (mm3)')
plt.ylabel('Weight (g)')
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
x_values = capomulin_group_df['Tumor Volume (mm3)']
y_values = capomulin_group_df['Weight (g)']
# Creat line regression
slope, intercept, rvalue, pvalue, stderr = linregress(x_values, y_values)
regression_values = x_values*slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# Plot line graph with correlation and regression.
plt.figure(figsize=(10,8))
plt.scatter(x_values, y_values)
plt.plot(x_values, regression_values, "red")
plt.annotate(line_eq, (36,22), fontsize=15, color="red")
plt.annotate(f'R2 = {round(rvalue**2,3)}', (36,21), fontsize=15, color="red")
plt.title('Mice Weight (g) Vs. Avg. Tumor Volume (mm3) for Capomulin Regimen')
plt.xlabel('Tumor Volume (mm3)')
plt.ylabel('Weight (g)')
print(f"The r-squared is: {round(rvalue**2,3)}")
###Output
The r-squared is: 0.709
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata_df = pd.read_csv(mouse_metadata_path)
study_results_df = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata_df, study_results_df)
# Display the data table for preview
combined_df
# Check the number of mice
mice_count = len(combined_df["Mouse ID"].unique())
mice_count
# Find any duplicate rows with the same Mouse ID's and Timepoints.
duplicate_rows = combined_df.loc[combined_df.duplicated(subset=["Mouse ID", "Timepoint"])]
duplicate_rows
# Drop any duplicate rows
clean_data = combined_df.drop(combined_df[combined_df['Mouse ID'] == 'g989'].index)
clean_df = pd.DataFrame(clean_data)
# Recheck the number of mice
clean_mice_count = len(clean_df["Mouse ID"].unique())
clean_mice_count
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance,
# standard deviation, and SEM of the tumor volume for each regimen.
regimen_mice = clean_df.groupby(["Drug Regimen"])
regimen_mean = regimen_mice["Tumor Volume (mm3)"].mean()
regimen_median = regimen_mice["Tumor Volume (mm3)"].median()
regimen_varience = regimen_mice["Tumor Volume (mm3)"].var()
regimen_std = regimen_mice["Tumor Volume (mm3)"].std()
regimen_sem = regimen_mice["Tumor Volume (mm3)"].sem()
summary = {"Mean":regimen_mean,"Median":regimen_median,"Varience":regimen_varience,"Standard Deviation":regimen_std,"SEM":regimen_sem}
summary_df = pd.DataFrame(summary)
summary_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of datapoints for each drug regimen using pandas.
# There should be a single bar per regimen
regimen_counts = clean_df.groupby('Drug Regimen').count()['Tumor Volume (mm3)']
regimen_counts.plot(kind='bar')
plt.ylabel('Number of Data Points')
plt.show()
# Generate identical bar plot using pyplot instead of pandas.
DrugRegimen=['Capomulin','Ceftamin','Infubinol','Ketapril','Naftisol','Placebo','Propriva','Ramicane','Stelasyn','Zoniferol']
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate identical pie plot using pyplot
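# --- Hedged sketch (not from the original submission): one possible way to finish this cell,
# --- assuming the clean_df DataFrame and DrugRegimen label list defined above.
counts = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].count()   # alphabetical, matches DrugRegimen
plt.bar(DrugRegimen, counts.values)
plt.xticks(rotation=90)
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Data Points')
plt.show()
sex_counts = clean_df['Sex'].value_counts()
sex_counts.plot(kind='pie', autopct='%1.1f%%')                            # pandas pie chart
plt.show()
plt.pie(sex_counts.values, labels=sex_counts.index, autopct='%1.1f%%')    # pyplot pie chart
plt.show()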
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# HINT: Not all mice lived until timepoint 45
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
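# --- Hedged sketch (assumption: the clean_df DataFrame built earlier in this notebook) ---
# Last (greatest) timepoint per mouse, merged back to recover each mouse's final tumor volume
last_tp = clean_df.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_tumor_df = pd.merge(last_tp, clean_df, on=["Mouse ID", "Timepoint"], how="left")
final_tumor_df.head()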
###Output
_____no_output_____
###Markdown
Calculate the quartiles and IQR and quantitatively determine if there are any potential outliers across all four treatment regimens.
###Code
# Calculate quartiles, IQR, and identify potential outliers for each regimen.
# One method to do this is the following, but you can use whatever method works for you.
##############################################################################
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
# There should be a single chart with four box plots inside it.
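# --- Hedged sketch, assuming the final_tumor_df built in the sketch above ---
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
tumor_vol_data = []
for drug in treatments:
    # Final tumor volumes for this regimen
    vols = final_tumor_df.loc[final_tumor_df["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_vol_data.append(vols)
    # IQR-based outlier check
    q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = vols.loc[(vols < lower) | (vols > upper)]
    print(f"{drug}: IQR = {iqr:.2f}, potential outliers = {list(outliers.round(2))}")
# Single chart with the four box plots
plt.boxplot(tumor_vol_data, labels=treatments)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()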
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a single mouse
# treated with Capomulin
capomulin_df = clean_df.loc[clean_df["Drug Regimen"]=="Capomulin"]
single_mouse_id = capomulin_df["Mouse ID"].iloc[0]            # pick one Capomulin mouse
single_mouse = capomulin_df.loc[capomulin_df["Mouse ID"]==single_mouse_id]
mouse_time = single_mouse["Timepoint"]
mouse_volume = single_mouse["Tumor Volume (mm3)"]
plt.plot(mouse_time,mouse_volume)
plt.title("Tumor Volume vs Time Point")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight
# for all mice in the Capomulin regimen
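# --- Hedged sketch, assuming the clean_df DataFrame defined earlier in this notebook ---
capomulin_subset = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin"]
capomulin_avg = capomulin_subset.groupby("Mouse ID").mean(numeric_only=True)
plt.scatter(capomulin_avg["Weight (g)"], capomulin_avg["Tumor Volume (mm3)"])
plt.xlabel("Mouse Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()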
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
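# --- Hedged sketch, assuming the capomulin_avg DataFrame from the scatter-plot sketch above ---
weights = capomulin_avg["Weight (g)"]
avg_vol = capomulin_avg["Tumor Volume (mm3)"]
corr = st.pearsonr(weights, avg_vol)[0]
print(f"Correlation coefficient between weight and average tumor volume: {corr:.2f}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(weights, avg_vol)
plt.scatter(weights, avg_vol)
plt.plot(weights, slope * weights + intercept, "r-")
plt.xlabel("Mouse Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()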
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
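# --- Hedged sketch, assuming the clean_df DataFrame from the cells above ---
mean_vol = clean_df.groupby(["Drug Regimen", "Timepoint"])["Tumor Volume (mm3)"].mean().unstack(level=0)
sem_vol = clean_df.groupby(["Drug Regimen", "Timepoint"])["Tumor Volume (mm3)"].sem().unstack(level=0)
for drug in ["Capomulin", "Infubinol", "Ketapril", "Placebo"]:
    plt.errorbar(mean_vol.index, mean_vol[drug], yerr=sem_vol[drug], marker="o", label=drug)
plt.xlabel("Timepoint (Days)")
plt.ylabel("Mean Tumor Volume (mm3)")
plt.legend()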
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
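# --- Hedged sketch; assumes the merged data has a "Metastatic Sites" column (column name is an assumption) ---
mean_met = clean_df.groupby(["Drug Regimen", "Timepoint"])["Metastatic Sites"].mean().unstack(level=0)
sem_met = clean_df.groupby(["Drug Regimen", "Timepoint"])["Metastatic Sites"].sem().unstack(level=0)
for drug in ["Capomulin", "Infubinol", "Ketapril", "Placebo"]:
    plt.errorbar(mean_met.index, mean_met[drug], yerr=sem_met[drug], marker="s", label=drug)
plt.xlabel("Timepoint (Days)")
plt.ylabel("Mean Metastatic Sites")
plt.legend()
plt.show()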
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
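# --- Hedged sketch, assuming the clean_df DataFrame from the cells above ---
mouse_counts = clean_df.groupby(["Drug Regimen", "Timepoint"])["Mouse ID"].count().unstack(level=0)
survival_pct = 100 * mouse_counts / mouse_counts.iloc[0]   # percent of mice still on study per timepoint
for drug in ["Capomulin", "Infubinol", "Ketapril", "Placebo"]:
    plt.plot(survival_pct.index, survival_pct[drug], marker="^", label=drug)
plt.xlabel("Timepoint (Days)")
plt.ylabel("Survival Rate (%)")
plt.legend()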
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
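# --- Hedged sketch, assuming the mean_vol pivot from the tumor-response sketch above ---
pct_change = 100 * (mean_vol.iloc[-1] - mean_vol.iloc[0]) / mean_vol.iloc[0]
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
changes = pct_change[drugs]
colors = ["green" if value < 0 else "red" for value in changes]   # shrinking tumors pass, growing fail
fig, ax = plt.subplots()
bars = ax.bar(drugs, changes, color=colors)
ax.axhline(0, color="black", linewidth=0.8)
ax.set_ylabel("% Tumor Volume Change")
for bar, value in zip(bars, changes):
    ax.text(bar.get_x() + bar.get_width() / 2, value, f"{value:.0f}%",
            ha="center", va="bottom" if value >= 0 else "top")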
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
study_data_complete = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID")
# Display the data table for preview
study_data_complete.head()
###Output
_____no_output_____
###Markdown
Checking the number of mice
###Code
# Checking the number of mice
# Get the unique values of 'Mouse ID' column
list_of_unique_mice = study_data_complete["Mouse ID"].unique()
length_of_unique_mice_list = len(list_of_unique_mice)
length_of_unique_mice_list
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = study_data_complete[study_data_complete.duplicated(subset=['Mouse ID', 'Timepoint'])]
duplicate_mice
# output should be ID of a single mouse
# dataframe that has duplicate
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mouse_data = study_data_complete[study_data_complete["Mouse ID"]=="g989"]
duplicate_mouse_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_mouse_data = study_data_complete[study_data_complete["Mouse ID"]!="g989"]
clean_mouse_data.head()
# Checking the number of mice in the clean DataFrame.
number_of_mice = len(clean_mouse_data["Mouse ID"].unique())
number_of_mice
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
mean = clean_mouse_data.groupby("Drug Regimen").mean()["Tumor Volume (mm3)"]
median = clean_mouse_data.groupby("Drug Regimen").median()["Tumor Volume (mm3)"]
variance = clean_mouse_data.groupby("Drug Regimen").var()["Tumor Volume (mm3)"]
standard_deviation = clean_mouse_data.groupby("Drug Regimen").std()["Tumor Volume (mm3)"]
standard_error = clean_mouse_data.groupby("Drug Regimen").sem()["Tumor Volume (mm3)"]
summary_table = pd.DataFrame({"Mean":mean,
"Median":median,
"Variance":variance,
"Standard Deviation":standard_deviation,
"Standard Error":standard_error,
})
summary_table
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
summary_table_aggregate = clean_mouse_data.groupby("Drug Regimen").agg({"Tumor Volume (mm3)":["mean", "median", "var", "std", "sem"]})
summary_table_aggregate
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
mice_in_treatment = clean_mouse_data.groupby(["Drug Regimen"])["Mouse ID"].count()
mice_in_treatment.plot(kind = "bar")
mice_in_treatment
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
plt.bar(mice_in_treatment.index.values, mice_in_treatment.values)
plt.xticks(rotation=90)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
distribution_of_sexes = clean_mouse_data.groupby(["Sex"])["Sex"].count()
distribution_of_sexes.plot(kind = "pie",autopct="%1.1f%%")
distribution_of_sexes
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(distribution_of_sexes.values, labels=distribution_of_sexes.index.values,autopct="%1.1f%%")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
last_timepoint = clean_mouse_data.groupby(["Mouse ID"])["Timepoint"].max()
last_timepoint = last_timepoint.reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merge_tumor_volume = pd.merge(last_timepoint,clean_mouse_data,on=["Mouse ID", "Timepoint"],how="left")
merge_tumor_volume
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
treatment = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
tumor_list = []
for drug in treatment:
tumor_volume = merge_tumor_volume.loc[merge_tumor_volume ["Drug Regimen"]==drug, "Tumor Volume (mm3)"]
tumor_list.append(tumor_volume)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Note: the block below analyzes tumor_volume, which after the loop holds only the last regimen (Ceftamin)
quartiles = tumor_volume.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of occupancy is: {lowerq}")
print(f"The upper quartile of occupancy is: {upperq}")
print(f"The interquartile range of occupancy is: {iqr}")
print(f"The the median of occupancy is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Determine outliers using upper and lower bounds
outlier = tumor_volume.loc[(tumor_volume < lower_bound) | (tumor_volume > upper_bound)]
outlier
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumor_list, labels= treatment )
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_mice = clean_mouse_data.loc[clean_mouse_data["Drug Regimen"] == "Capomulin"]
mouse_id = capomulin_mice.loc[capomulin_mice["Mouse ID"] == "j246"]
x = mouse_id["Timepoint"]
y = mouse_id["Tumor Volume (mm3)"]
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
plt.plot(x,y)
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_average = capomulin_mice.groupby("Mouse ID").mean()
x = capomulin_average["Weight (g)"]
y = capomulin_average["Tumor Volume (mm3)"]
plt.scatter(x,y)
plt.xlabel('Mouse Weight')
plt.ylabel('Average Tumor Volume')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
x = capomulin_average["Weight (g)"]
y = capomulin_average["Tumor Volume (mm3)"]
corr=round(st.pearsonr(x,y)[0],2)
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x, y)
regress_values = x * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x,y)
plt.plot(x,regress_values,"r-")
plt.annotate(line_eq,(20,35),fontsize=15,color="red")
plt.xlabel("Weight (g)")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
#Drugnames column
#Timepoint is an index for the dataframe
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
pct_changes = (tumor_pct_change["Capomulin"],
tumor_pct_change["Infubinol"],
tumor_pct_change["Ketapril"],
tumor_pct_change["Placebo"])
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
#def autolabel(rects):
# attach some text labels
# for rect in rects:
# height = rect.get_height()
# plt.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%d'%int(height),
#ha='center', va='bottom')
#autolabel(rects1)
#autolabel(rects2)
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights

1. Final tumor volumes were significantly smaller for mice treated with Capomulin and Ramicane than for mice treated with Infubinol and Ceftamin.
2. Average tumor volume for Ketapril was the highest across the study, indicating that this treatment may have been the least effective.
3. Of the 4 regimens, only Infubinol had an outlier for final tumor volume.
4. There is a strong correlation (0.82) between mouse weight and average tumor volume for the Capomulin regimen.
5. The sex distribution of the mice was nearly even, but more analysis could be done to see whether sex plays a factor in final tumor volume.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combine_data=pd.merge(mouse_metadata, study_results, on='Mouse ID', how='left')
# Display the data table for preview
combine_data.head()
#len(combine_data['Mouse ID'])
# Checking the number of mice.
combine_data['Mouse ID'].nunique()
combine_data.sort_values(['Mouse ID','Timepoint'],ascending=True).head()
combine_data.info()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_data=combine_data[combine_data.duplicated(['Mouse ID','Timepoint'])]
duplicate_data
# Optional: Get all the data for the duplicate mouse ID.
duplicate_data['Mouse ID'].unique()
combine_data.loc[combine_data['Mouse ID']=='g989']
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
combine_data_clean=combine_data.loc[combine_data['Mouse ID'] != 'g989']
combine_data_clean
# Checking the number of mice in the clean DataFrame.
len(combine_data_clean['Mouse ID'].unique())
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
groupByData = combine_data_clean.groupby(["Drug Regimen"])
groupByData
summaryDF = pd.DataFrame({
"Mean": groupByData["Tumor Volume (mm3)"].mean().map('{:.2f}'.format),
"Median": groupByData["Tumor Volume (mm3)"].median().map('{:.2f}'.format),
"Variance": groupByData["Tumor Volume (mm3)"].var().map('{:.2f}'.format),
"Standard Variance": groupByData["Tumor Volume (mm3)"].std().map('{:.2f}'.format),
"SEM": groupByData["Tumor Volume (mm3)"].sem().map('{:.2f}'.format)
})
summaryDF.head()
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
groupby_regimen=combine_data_clean.groupby('Drug Regimen')
aggregate = groupby_regimen.agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"]
#aggregate['mean'].map('{:.2f}'.format)
aggregate
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
Total_mice_eachregimen1=pd.DataFrame(combine_data_clean.groupby('Drug Regimen').count()['Mouse ID'])
#Total_mice_eachregimen.reset_index(inplace=True)
#Total_mice_eachregimen.style.hide_index(inplace=True)
Total_mice_eachregimen1.plot(kind="bar", figsize=(5,3),rot=50)
plt.title("Total number of mice for each treatment regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of mice")
plt.savefig("../Images/pandas_bar.png")
plt.show()
plt.tight_layout()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
Total_mice_eachregimen=pd.DataFrame(combine_data_clean.groupby('Drug Regimen').count()['Mouse ID'])
Total_mice_eachregimen.reset_index(inplace=True)
#x_axis=[]
#x_axis=Total_mice_eachregimen2['Drug Regimen
x_axis=np.arange(len(Total_mice_eachregimen['Drug Regimen']))
tickLocations = [value for value in x_axis]
y_axis=Total_mice_eachregimen['Mouse ID']
plt.bar(x_axis, y_axis, color="b", align="center")
plt.xticks(tickLocations, Total_mice_eachregimen['Drug Regimen'], rotation="vertical")
plt.title("Total number of mice for each treatment regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of mice")
plt.savefig("../Images/pyplot_bar.png")
plt.show()
plt.tight_layout()
miceCount = combine_data_clean["Sex"].value_counts()
miceCount
# Generate a pie plot showing the distribution of female versus male mice using pandas
plt.figure()
miceCount.plot(kind="pie",rot=50, autopct='%1.1f%%', startangle=140)
plt.tight_layout()
plt.axis("equal")
plt.title("Distribution of female versus male mice")
plt.tight_layout()
plt.savefig("../Images/pandas_pie.png")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(miceCount.values, explode=(0.1,0), labels=miceCount.index.values, colors=["red","blue"],
autopct="%1.1f%%", shadow=True, startangle=150)
plt.axis("equal")
plt.title("Distribution of female versus male mice")
plt.savefig("../Images/pyplot_pie.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
regimen_data = combine_data_clean[(combine_data_clean["Drug Regimen"] == "Capomulin") |
(combine_data_clean["Drug Regimen"] == "Ramicane") |
(combine_data_clean["Drug Regimen"] == "Infubinol") |
(combine_data_clean["Drug Regimen"] == "Ceftamin")]
# Start by getting the last (greatest) timepoint for each mouse
tumor_volume_df = regimen_data.groupby(regimen_data['Mouse ID']).agg({'Timepoint':['max']})
tumor_volume_df.columns = ['Timepoint']
tumor_volume_df
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
tumor_volume_df = tumor_volume_df.reset_index()
tumor_vol_final_df = pd.merge(tumor_volume_df, combine_data_clean, how="left", on=["Mouse ID", "Timepoint"])
tumor_vol_final_df
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_volumes = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
for drug in treatments:
tumor_vol_by_drug = tumor_vol_final_df['Tumor Volume (mm3)'].loc[tumor_vol_final_df['Drug Regimen'] == drug]
# add subset
# Determine outliers using upper and lower bounds
quartiles = tumor_vol_by_drug.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f'For {drug}, Interquartile Range (IQR) is {iqr}')
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f'For {drug}, values below {lower_bound} could be outliers')
print(f'For {drug}, values above {upper_bound} could be outliers\n')
tumor_volumes.append(tumor_vol_by_drug)
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig, ax = plt.subplots()
ax.set_title('Final Tumor Volume per Regimen')
ax.set_xticklabels(treatments)
ax.set_ylabel('Tumor Volume (mm3)')
ax.boxplot(tumor_volumes, flierprops=dict(markerfacecolor='g', marker='D'))
plt.savefig("../Images/boxplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
combine_data_clean.loc[combine_data_clean['Drug Regimen']=='Capomulin'].head(20)
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
s185_mouse = combine_data_clean[['Timepoint', 'Tumor Volume (mm3)']].loc[(combine_data_clean['Drug Regimen'] == 'Capomulin') & (combine_data_clean['Mouse ID']=='s185')]
s185_mouse
plt.plot(s185_mouse['Timepoint'], s185_mouse['Tumor Volume (mm3)'], marker='o')
plt.title("Capomulin Regimen - Mouse (k403)")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Timepoint")
plt.savefig("../Images/lineplot.png")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_avg_tumor_wgt = combine_data_clean.loc[combine_data_clean['Drug Regimen'] == 'Capomulin'].groupby(combine_data_clean['Timepoint']).agg({'Tumor Volume (mm3)':['mean'], 'Weight (g)':['mean']})
capomulin_avg_tumor_wgt.columns = ['Average Tumor Volume {mm3}', 'Average Mouse Weight (g)']
capomulin_avg_tumor_wgt.columns
weight=capomulin_avg_tumor_wgt['Average Mouse Weight (g)']
volume=capomulin_avg_tumor_wgt['Average Tumor Volume {mm3}']
plt.scatter(weight,volume)
plt.title('Avg Tumor Volume vs. Avg Mouse Weight')
plt.xlabel('Mouse Weight')
plt.ylabel('Tumor Volume')
plt.ylim(35,46)
plt.savefig("../Images/scatterplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(volume,weight)
print(f'The correlation between Average Tumor Volume and Mouse Weight is {round(correlation[0],2)}')
# linear regression
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(weight,volume)
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
print(f'The linear regression equation is {line_eq}')
# plot line with scatter
volume = capomulin_avg_tumor_wgt['Average Tumor Volume {mm3}']
weight = capomulin_avg_tumor_wgt['Average Mouse Weight (g)']
plt.scatter(weight,volume)
plt.title('Avg Tumor Volume vs. Avg Mouse Weight')
plt.xlabel('Mouse Weight')
plt.ylabel('Tumor Volume')
plt.ylim(35,46)
# calculate regression values
reg_values = weight * slope + intercept
plt.plot(weight, reg_values, "r-")
plt.annotate(line_eq,(19.95,38),fontsize=15,color="red")
plt.savefig("../Images/regression.png")
plt.show()
###Output
The correlation between Average Tumor Volume and Mouse Weight is 0.82
The linear regression equation is y = 20.29x + -364.52
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
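# --- Hedged sketch (not from the original submission): one possible way to complete the steps above ---
merged_df = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID")
print(merged_df["Mouse ID"].nunique())                                   # mice before cleaning
dup_ids = merged_df.loc[merged_df.duplicated(["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
cleaned_merged_df = merged_df[~merged_df["Mouse ID"].isin(dup_ids)]      # drop the duplicated mouse entirely
print(cleaned_merged_df["Mouse ID"].nunique())                           # mice after cleaning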
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
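# --- Hedged sketch, assuming the cleaned_merged_df from the sketch above ---
summary_agg = cleaned_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].agg(
    ["mean", "median", "var", "std", "sem"])
summary_agg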
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights

After analyzing and reviewing the data from the Pymaceuticals dataset, I have listed three observations from the figures and tables generated.

1. Even though the study started with 250 mice, only 130 mice survived through the full 45-day period; the rest may have died during treatment.
2. According to the summary statistics table, only the Capomulin and Ramicane treatments appear to have worked on these mice: the average tumor volumes for Capomulin and Ramicane are 40.68 mm^3 and 40.22 mm^3 respectively.
3. According to the scatter plot and the linear regression line, there is a strong correlation between mouse weight and average tumor volume: the correlation coefficient is 0.84 (above 0.7), and the slope of the regression line is 0.95.

Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
from sklearn import datasets
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
data = pd.merge(mouse_metadata,study_results, on = "Mouse ID")
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Cleaning the data
drug_tumor = data[["Drug Regimen", "Tumor Volume (mm3)"]]
# Group by Drug Regimen
group_drug = drug_tumor.groupby(["Drug Regimen"])
# To find Mean, Median, Variance, STD and SEM of Tumor Volume
summary = round(group_drug.mean(),2)
# Rename
summary = summary.rename(columns={"Tumor Volume (mm3)":"Tumor Volume Mean"})
# To find Mean, Median, Variance, STD and SEM of Tumor Volume
summary["Tumor Volume Median"] = round(group_drug.median(),2)
summary["Tumor Volume Variance"] = round(group_drug.var(),2)
summary["Tumor Volume STD"] = round(group_drug.std(),2)
summary["Tumor Volume SEM"] = round(group_drug.sem(),2)
# Display results
summary
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
# To count data points
data_points = group_drug.count()
# Rename
data_points = data_points.rename(columns={"Tumor Volume (mm3)":"Tumor Volume Count"})
# Pandas bar plot
data_bar = data_points.plot(kind="bar", facecolor="blue")
# Title, x label, y label
plt.title('Number of Data Points for each Drug Regimen')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Data Points')
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
# To get and sort out drug regimen
drugs = data["Drug Regimen"].unique()
drugs.sort()
# Size of the plot
plt.figure(figsize=(10,5))
# Matplot bar plot
plt.bar(drugs, data_points["Tumor Volume Count"] , color="red", align="center",)
# Title, x label, y label
plt.title("Number of Data Points for each Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Data Points")
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# To count how many male female mouses are in the study.
data_gender = pd.DataFrame(data["Sex"].value_counts())
# Pandas pie chart
gender_plot = data_gender.plot.pie(y='Sex', figsize=(5, 5), autopct="%1.1f%%")
# Title
plt.title("Distribution of female vs male mice")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Createing labels for matplot pie chart
labels = ["Male", "Female"]
# Matplot pie chart
plt.pie(data_gender['Sex'],autopct="%1.1f%%", labels = labels)
# Title and y label and legend
plt.title("Distribution of female vs male mice")
plt.ylabel("Sex")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Cleaning the data
data_reduced = data[["Mouse ID", "Drug Regimen", "Timepoint", "Tumor Volume (mm3)"]]
# To only get the final volume (note: filtering on Timepoint == 45 keeps only mice that survived to day 45)
final_volume = data_reduced.loc[data_reduced["Timepoint"] == 45, :]
# To extract the data for mentioned four treatments
data_capomulin = final_volume.loc[final_volume["Drug Regimen"] == "Capomulin"]
data_ramicane = final_volume.loc[final_volume["Drug Regimen"] == "Ramicane"]
data_infubinol = final_volume.loc[final_volume["Drug Regimen"] == "Infubinol"]
data_ceftamin = final_volume.loc[final_volume["Drug Regimen"] == "Ceftamin"]
# Rename
data_capomulin = data_capomulin.rename(columns={"Tumor Volume (mm3)":"Final Tumor Volume"})
data_ramicane = data_ramicane.rename(columns={"Tumor Volume (mm3)":"Final Tumor Volume"})
data_infubinol = data_infubinol.rename(columns={"Tumor Volume (mm3)":"Final Tumor Volume"})
data_ceftamin = data_ceftamin.rename(columns={"Tumor Volume (mm3)":"Final Tumor Volume"})
# Capomulin Analysis
# Final Volume
final_v_cap = data_capomulin["Final Tumor Volume"]
# Calculating the IQR and quantitatively
quartiles_cap = final_v_cap.quantile([.25,.5,.75])
lowerq_cap = quartiles_cap[0.25]
upperq_cap = quartiles_cap[0.75]
iqr_cap = upperq_cap-lowerq_cap
# Printing the results
print(f"The lower quartile of final tumor volume for Capomulin treatment regimen is: {lowerq_cap}")
print(f"The the median of final tumor volume for Capomulin treatment regimen is: {quartiles_cap[0.5]} ")
print(f"The upper quartile of final tumor volume for Capomulin treatment regimen is: {upperq_cap}")
print(f"The interquartile range of final tumor volume for Capomulin treatment regimen is: {iqr_cap}")
# To see if there is an outlier
lower_bound_cap = lowerq_cap - (1.5*iqr_cap)
upper_bound_cap = upperq_cap + (1.5*iqr_cap)
print(f"Values below {lower_bound_cap} could be outliers.")
print(f"Values above {upper_bound_cap} could be outliers.")
print(f"Since the minimum final tumor volume is {final_v_cap.min()} which is greater than {lower_bound_cap} and the maximum final tumor volume is {final_v_cap.max()} which is less than {upper_bound_cap}. Therefore, there is no outlier")
# Ramicanein Analysis
# Final Volume
final_v_ram = data_ramicane["Final Tumor Volume"]
# Calculating the IQR and quantitatively
quartiles_ram = final_v_ram.quantile([.25,.5,.75])
lowerq_ram = quartiles_ram[0.25]
upperq_ram = quartiles_ram[0.75]
iqr_ram = upperq_ram-lowerq_ram
# Printing the results
print(f"The lower quartile of final tumor volume for Ramicane treatment regimen is: {lowerq_ram}")
print(f"The the median of final tumor volume for Ramicane treatment regimen is: {quartiles_ram[0.5]} ")
print(f"The upper quartile of final tumor volume for Ramicane treatment regimen is: {upperq_ram}")
print(f"The interquartile range of final tumor volume for Ramicane treatment regimen is: {iqr_ram}")
# To see if there is an outlier
lower_bound_ram = lowerq_ram - (1.5*iqr_ram)
upper_bound_ram = upperq_ram + (1.5*iqr_ram)
print(f"Values below {lower_bound_ram} could be outliers.")
print(f"Values above {upper_bound_ram} could be outliers.")
print(f"Since the minimum final tumor volume is {final_v_ram.min()} which is greater than {lower_bound_ram} and the maximum final tumor volume is {final_v_ram.max()} which is less than {upper_bound_ram}. Therefore, there is no outlier")
# Infobunol Analysis
# Final Volume
final_v_inf = data_infubinol["Final Tumor Volume"]
# Calculating the IQR and quantitatively
quartiles_inf = final_v_inf.quantile([.25,.5,.75])
lowerq_inf = quartiles_inf[0.25]
upperq_inf = quartiles_inf[0.75]
iqr_inf = upperq_inf-lowerq_inf
# Printing the results
print(f"The lower quartile of final tumor volume for Infobunol treatment regimen is: {lowerq_inf}")
print(f"The the median of final tumor volume for Infobunol treatment regimen is: {quartiles_inf[0.5]} ")
print(f"The upper quartile of final tumor volume for Infobunol treatment regimen is: {upperq_inf}")
print(f"The interquartile range of final tumor volume for Infobunol treatment regimen is: {iqr_inf}")
# To see if there is an outlier
lower_bound_inf = lowerq_inf - (1.5*iqr_inf)
upper_bound_inf = upperq_inf + (1.5*iqr_inf)
print(f"Values below {lower_bound_inf} could be outliers.")
print(f"Values above {upper_bound_inf} could be outliers.")
print(f"Since the minimum final tumor volume is {final_v_inf.min()} which is greater than {lower_bound_inf} and the maximum final tumor volume is {final_v_inf.max()} which is less than {upper_bound_inf}. Therefore, there is no outlier")
# Ceftamin Analysis
# Final Volume
final_v_cef = data_ceftamin["Final Tumor Volume"]
# Calculating the IQR and quantitatively
quartiles_cef = final_v_cef.quantile([.25,.5,.75])
lowerq_cef = quartiles_cef[0.25]
upperq_cef = quartiles_cef[0.75]
iqr_cef = upperq_cef-lowerq_cef
# Printing the results
print(f"The lower quartile of final tumor volume for Ceftamin treatment regimen is: {lowerq_cef}")
print(f"The the median of final tumor volume for Ceftamin treatment regimen is: {quartiles_cef[0.5]} ")
print(f"The upper quartile of final tumor volume for Ceftamin treatment regimen is: {upperq_cef}")
print(f"The interquartile range of final tumor volume for Ceftamin treatment regimen is: {iqr_cef}")
# To see if there is an outlier
lower_bound_cef = lowerq_cef - (1.5*iqr_cef)
upper_bound_cef = upperq_cef + (1.5*iqr_cef)
print(f"Values below {lower_bound_cef} could be outliers.")
print(f"Values above {upper_bound_cef} could be outliers.")
print(f"Since the minimum final tumor volume is {final_v_cef.min()} which is greater than {lower_bound_cef} and the maximum final tumor volume is {final_v_cef.max()} which is less than {upper_bound_cef}. Therefore, there is no outlier")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
allfour = [final_v_cap, final_v_ram, final_v_inf, final_v_cef]
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume across all four treatment regimen')
ax1.set_ylabel('Final Tumor Volume mm3')
ax1.boxplot(allfour, labels=["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# To reduce the data to get data for a single mouse with Capomulin treatment
single_mouse = data.loc[data["Mouse ID"] == "s185"]
# Matplot line chart
single_mouse_line, = plt.plot(single_mouse["Timepoint"],single_mouse["Tumor Volume (mm3)"] , marker="+",color="blue", linewidth=1, label="Tumor Volume (mm3)")
# Title, x label, y label
plt.title("Time points vs Tumor Volume for Mouse ID: s185. Drug Regimen: Capomulin")
plt.xlabel("Time (Days)")
plt.ylabel("Tumor Volume (mm3)")
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
# To reduce the data to only get data for mices under Capomulin treatment
capomulin = data.loc[data["Drug Regimen"] == "Capomulin"]
# Group by Mouse ID
group_capomulin = capomulin.groupby(["Mouse ID"])
# The store weights and Tumor Volume in lists
weights = group_capomulin["Weight (g)"].mean()
tumor_v = group_capomulin["Tumor Volume (mm3)"].mean()
# To get regressian variables
(slope, intercept, rvalue, pvalue, stderr) = linregress(weights, tumor_v)
# Equation of the regression line
regress_values = weights * slope + intercept
# To plot the scatter plot using matplot
plt.scatter(weights,tumor_v , marker="o", facecolors="red", edgecolors="black")
# To plot the refression line
plt.plot(weights,regress_values,"r-")
# Title, x label, y label
plt.title("Mouse Weight vs Average Tumor Volume for Capomul Regimen")
plt.xlabel("Mouse Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
# To print r-squared or correlation coefficient using two different methods
print(f"The r-squared is: {round(rvalue,2)}")
print(f"The correlation coefficient between weights and average tumor volume is {round(st.pearsonr(weights,tumor_v)[0],2)}")
# To print the equation of the regression line
print(f"The equation of linear regression line is y = {round(slope,2)}*x + {round(intercept,2)}")
###Output
The r-squared is: 0.84
The correlation coefficient between weights and average tumor volume is 0.84
The equation of linear regression line is y = 0.95*x + 21.55
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
from sklearn import datasets
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata, study_results, how='outer', on='Mouse ID')
# Display the data table for preview
combined_df.head()
# Checking the number of mice.
total_mouse = len(combined_df["Mouse ID"].unique())
total_mouse
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dup_mouse = combined_df.loc[combined_df.duplicated(subset=['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
dup_mouse
# Optional: Get all the data for the duplicate mouse ID.
dup_mouse_data = combined_df.loc[combined_df['Mouse ID'] == "g989",:]
dup_mouse_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
data_clean_df = combined_df[combined_df["Mouse ID"].isin(dup_mouse)==False]
data_clean_df.head()
# Checking the number of mice in the clean DataFrame.
total_mouse_clean = len(data_clean_df["Mouse ID"].unique())
total_mouse_clean
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
stat_data_clean_df = data_clean_df.groupby (["Drug Regimen"])
stat_data_clean_df.count().head()
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
drug_regime_mean = round(stat_data_clean_df["Tumor Volume (mm3)"].mean(),2)
drug_regime_median= round(stat_data_clean_df["Tumor Volume (mm3)"].median(),2)
drug_regime_variance = round(stat_data_clean_df["Tumor Volume (mm3)"].var(),2)
drug_regime_stdr_dev = round(stat_data_clean_df["Tumor Volume (mm3)"].std(),2)
regimen_sem = round(stat_data_clean_df["Tumor Volume (mm3)"].sem(),2)
# Assemble the resulting series into a single summary dataframe.
summary_df = pd.DataFrame({"Mean":drug_regime_mean,
"Median":drug_regime_median,
"Variance": drug_regime_variance,
"Standard Deviation": drug_regime_stdr_dev,
"SEM": regimen_sem})
summary_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summary_II = round(data_clean_df.groupby('Drug Regimen').agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"],2)
summary_II.head(12)
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
total_measurments = data_clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
plot_pandas = total_measurments.plot.bar(color='r')
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.title("Number of Measurment")
plt.savefig("../Images/Number of Measurments.png", bbox_inches = "tight")
plt.tight_layout()
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
count_mouse = data_clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
count_mouse
mouse_list =data_clean_df.groupby(["Drug Regimen"])["Mouse ID"].count().tolist()
mouse_list
x_axis = np.arange(len(count_mouse))
fig1, ax1 = plt.subplots()
plt.bar(x_axis, mouse_list, color='r', alpha=0.8, align='center')
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, ['Capomulin', 'Ceftamin', 'Infubinol', 'Ketapril', 'Naftisol', 'Placebo', 'Propriva', 'Ramicane', 'Stelasyn', 'Zoniferol'], rotation='vertical')
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, max(mouse_list)+10)
plt.title("Number of Mouses per Treatment")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mouses")
plt.savefig("../Images/Number of Mouses per Treatment.png", bbox_inches = "tight")
# Generate a pie plot showing the distribution of female versus male mice using pandas
groupby_gender = data_clean_df.groupby(["Mouse ID","Sex"])
groupby_gender
gender_df = pd.DataFrame(groupby_gender.size())
mouse_gender = pd.DataFrame(gender_df.groupby(["Sex"]).count())
mouse_gender.columns = ["Total Count"]
mouse_gender["Percentage of Sex"] = round((100*(mouse_gender["Total Count"]/mouse_gender["Total Count"].sum())),2)
mouse_gender
# Generate a pie plot showing the distribution of female versus male mice using pandas
colors = ["pink", "blue"]
explode = (0.1, 0)
mouse_gender.plot.pie(y='Total Count', colors = colors, startangle=140, explode = explode, shadow = True, autopct="%1.1f%%")
plt.title("Distribution of Female Vs Male")
plt.ylabel(" ")
plt.axis("equal")
plt.savefig("../Images/Distribution of Female Vs Male.png", bbox_inches = "tight")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
labels = ["Female","Male"]
sizes = [49.799197,50.200803]
colors = ['pink', 'blue']
explode = (0.1, 0)
plt.pie(sizes, explode=explode,labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=140,)
plt.title("Distribution of Female Vs Male")
plt.axis("equal")
plt.savefig("../Images/Distribution of Female Vs Male Pyplot", bbox_inches = "tight")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
max_timepoint = data_clean_df.groupby(['Mouse ID'])['Timepoint'].max()
max_timepoint
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
maxtimepoint_df = pd.merge(max_timepoint , data_clean_df, on=(["Mouse ID","Timepoint"]))
maxtimepoint_df.head()
# Put treatments into a list for for loop (and later for plot labels)
drug_regimen = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumor_volume = []
total_drug_regimen = len(drug_regimen)
quartile = [0,1,2,3]
lowerq = [0,1,2,3]
upperq = [0,1,2,3]
iqr = [0,1,2,3]
lower_bound = [0,1,2,3]
upper_bound = [0,1,2,3]
# Locate the rows which contain mice on each drug and get the tumor volumes
for drug in drug_regimen:
drug_regimen_value = maxtimepoint_df.loc[maxtimepoint_df["Drug Regimen"] == drug]
tumor = drug_regimen_value['Tumor Volume (mm3)']
tumor_volume.append(tumor)
for i in range(total_drug_regimen):
quartile[i] = tumor_volume[i].quantile([.25,.5,.75])
lowerq[i] = quartile[i][0.25]
upperq[i] = quartile[i][0.75]
iqr[i] = upperq[i]-lowerq[i]
print(f"{drug_regimen[i]} treatment lower quartile is: {lowerq[i]}")
print(f"{drug_regimen[i]} treatment upper quartile is: {upperq[i]}")
print(f"{drug_regimen[i]} treatment interquartile range is: {iqr[i]}")
lower_bound[i] = lowerq[i] - (1.5*iqr[i])
upper_bound[i] = upperq[i] + (1.5*iqr[i])
print(f"{drug_regimen[i]} treatment Values below {lower_bound[i]} could be outliers.")
print(f"{drug_regimen[i]} treatment Values above {upper_bound[i]} could be outliers.")
print("")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
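# A possible completion (sketch): box plot of the final tumor volumes collected in the loop above
fig1, ax1 = plt.subplots()
ax1.boxplot(tumor_volume, labels=drug_regimen)
ax1.set_title("Final Tumor Volume by Regimen")
ax1.set_ylabel("Final Tumor Volume (mm3)")
plt.show()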
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_df = data_clean_df.loc[data_clean_df["Drug Regimen"] == "Capomulin"]
line_plot = capomulin_df.loc[capomulin_df["Mouse ID"] == "b128"]
line_plot.head()
x_axis = line_plot["Timepoint"]
y_axis = line_plot["Tumor Volume (mm3)"]
fig1, ax1 = plt.subplots()
plt.title("Mouse b128 Treated With Capomulin")
plt.plot(x_axis, y_axis, linewidth=1, markersize=8,marker="o",color="purple")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.savefig("../Images/Mouse b128 Treated With Capomulin", bbox_inches = "tight")
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_average_volume =capomulin_df.groupby(['Mouse ID']).mean()
marker_size=15
plt.scatter(capomulin_average_volume['Weight (g)'],capomulin_average_volume["Tumor Volume (mm3)"],s=75, color="purple")
plt.title("Capomulin Tumor Volume vs. Mouse Weight")
plt.xlabel("Weight (g)")
plt.ylabel("Averag Tumor Volume (mm3)")
plt.savefig("../Images/Capomulin Tumor Volume vs. Mouse Weight.png", bbox_inches = "tight")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation=round(st.pearsonr(capomulin_average_volume["Weight (g)"],capomulin_average_volume["Tumor Volume (mm3)"])[0],2)
print(f"The correlation between mouse weight and average tumor volume is {correlation}")
x_values = capomulin_average_volume["Weight (g)"]
y_values = capomulin_average_volume["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(20,36),fontsize=14,color="red")
plt.title("Linear Regression Equation")
plt.xlabel("Average Tumor Volume (mm3)")
plt.ylabel("Weight (g)")
print(f"The r-squared is: {rvalue**2}")
plt.savefig("../Images/Linear Regression.png", bbox_inches = "tight")
plt.show()
###Output
The r-squared is: 0.7088568047708717
###Markdown
Observations and Insights

1. Capomulin and Ramicane demonstrated more effectiveness than any other treatment regimen. When analyzing the final tumor volume by regimen, Capomulin shows the best result, with most mice having the lowest tumor volumes, and Ramicane comes in second place.
2. Capomulin mice had a continuous reduction in tumor volume during the treatment period; the tumor decreased by about 90% by the end. There is also a positive correlation between mouse weight and average tumor volume: mice with lower weight also have smaller tumor volumes.
3. There were outliers that affected the results of the Infubinol regimen, although this might not be the only reason for its negative results. Both Infubinol and Ceftamin showed median tumor volumes around 40% greater than Capomulin and Ramicane.
4. There were no problems in the data set, except for one mouse that had duplicate entries and was therefore removed from the set.
###Code
%matplotlib inline
# Dependencies and Setup
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata,study_results,how="outer",on="Mouse ID")
# Display the data table for preview
combined_df.head()
# Checking the number of mice.
count_mice = combined_df["Mouse ID"].nunique()
count_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mouse_id = combined_df.loc[combined_df.duplicated(subset=['Mouse ID',"Timepoint"]),"Mouse ID"].unique()
duplicate_mouse_id
# Optional: Get all the data for the duplicate mouse ID.
duplicated_mouse = combined_df.loc[combined_df['Mouse ID'] == 'g989',:]
duplicated_mouse
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
#cleaned_df = combined_df[~combined_df.duplicated(subset=['Mouse ID',"Timepoint"])]
#cleaned_df = combined_df[combined_df.duplicated['Mouse ID'].isin(duplicate_mouse_id)==False]
cleaned_df = combined_df.loc[combined_df['Mouse ID'] != 'g989',:]
cleaned_df.head()
# Checking the number of mice in the clean DataFrame.
count_mice_cleaned_df = cleaned_df["Mouse ID"].nunique()
count_mice_cleaned_df
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
grouped_drug = cleaned_df.groupby("Drug Regimen")
regimen_mean = grouped_drug["Tumor Volume (mm3)"].mean()
regimen_median = grouped_drug["Tumor Volume (mm3)"].median()
regimen_var = grouped_drug["Tumor Volume (mm3)"].var()
regimen_std = grouped_drug["Tumor Volume (mm3)"].std()
regimen_sem = grouped_drug["Tumor Volume (mm3)"].sem()
summary_stats = pd.DataFrame({"Mean":regimen_mean,
"Median":regimen_median,
"Variance":regimen_var,
"Std Deviation":regimen_std,
"SEM":regimen_sem})
summary_stats
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
stats = cleaned_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].aggregate(['mean','median','var','std','sem'])
stats
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
total_mice_per_drug = cleaned_df.groupby("Drug Regimen")["Mouse ID"].nunique()
# Generate the bar plot
total_mice_per_drug.plot(kind='bar', color="red", title="Number of Unique Mice per Drug Regimen",figsize=(7,5))
plt.ylabel("Number of Unique Mice")
plt.tight_layout()
# Save the figure
plt.savefig("output_data/total_mice_per_drug.png")
# Diplay plot
plt.show()
# Converting series to DF
df_mice = total_mice_per_drug.to_frame()
df_mice.index.name = 'Drug Regimen'
df_mice.reset_index(level=None, drop=False, inplace=True)
df_mice.head()
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
x_axis = np.arange(0,len(df_mice))
ticks = [value for value in x_axis]
plt.figure(figsize=(8,4))
plt.bar(x_axis, df_mice["Mouse ID"], color="r", align="center")
plt.xticks(ticks,df_mice["Drug Regimen"], rotation="vertical")
# Set the limits of the x and y axis
plt.xlim(-0.75, len(x_axis))
plt.ylim(0, max(df_mice["Mouse ID"])+5)
# Give the chart a title, x label, and y label, give proper layot
plt.title("Number of Unique Mince per Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number Unique of Mice")
plt.tight_layout()
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_distr = cleaned_df.groupby("Sex")["Mouse ID"].count()
# Set details for the plot
colors=['red','blue']
plt.figure()
gender_distr.plot(kind='pie', figsize=(5, 5),title="Distribution of Female Vs. Male Mice",autopct="%1.1F%%", colors=colors)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Convert series into dataframe
sex_df = gender_distr.to_frame()
sex_df.index.name = 'Sex'
sex_df.reset_index(level=None, drop=False, inplace=True)
renamed_sex_df = sex_df.rename(columns={"Sex":"Sex", "Mouse ID":"Distribution of Mice"})
renamed_sex_df
# Passing plot details
sex = ["Female","Male"]
count = [930,958]
x_axis = np.arange(0,len(sex))
explode = (0.1,0)
# Tell matplotlib to create a pie chart based upon the above data
plt.figure()
colors= ['red','blue']
plt.title("Distribution of Female Vs. Male Mice")
plt.pie(count,labels=sex,colors=colors,autopct="%1.1f%%",shadow=True, explode=explode)
# Create axes which are equal so we have a perfect circle
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
max_timepoint = cleaned_df.groupby(["Mouse ID"])['Timepoint'].max()
max_timepoint = max_timepoint.reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
max_tp_and_tumor_vol = max_timepoint.merge(cleaned_df, on=["Mouse ID","Timepoint"], how='left')
max_tp_and_tumor_vol
# Put treatments into a list for for loop (and later for plot labels)
drug_regimen = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_volume = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for drug in drug_regimen:
final_tumor_volume = max_tp_and_tumor_vol.loc[max_tp_and_tumor_vol['Drug Regimen'] == drug, "Tumor Volume (mm3)"]
tumor_volume.append(final_tumor_volume)
quartiles = final_tumor_volume.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
outlier_tumor_vol = final_tumor_volume.loc[(final_tumor_volume < lower_bound) | (final_tumor_volume > upper_bound)]
print(f"The lower quartile of tumor volume is: {lowerq}")
print(f"The upper quartile of tumor volume is: {upperq}")
print(f"The interquartile range of tumor volume is: {iqr}")
print(f"The median of tumor volume is: {quartiles[0.5]} ")
print(f"Outliers using upper and lower bounds: {outlier_tumor_vol}")
print('-------------------------')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
red_square = dict(markerfacecolor='r', marker='s')
plt.boxplot(tumor_volume, labels=drug_regimen, notch=True, flierprops=red_square)
plt.title("Final Tumor Volume by Regimens")
plt.ylabel('Final Tumor Volume')
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Select a mouse treated with Capomulin
a_mouse_capomulin = cleaned_df.loc[cleaned_df['Mouse ID'] == 's185',:]
# Getting data for the plot
a_mouse_capomulin = a_mouse_capomulin[['Tumor Volume (mm3)','Timepoint']]
# Set variables
avg_tumor = a_mouse_capomulin['Tumor Volume (mm3)']
timepoint = a_mouse_capomulin['Timepoint']
# Plot the line that will be used to track a mouse's treatment over the days
plt.plot(timepoint,avg_tumor, c='y')
# Give the plot a title, x label, and y label, give proper layout
plt.title('Capomulin: Tumor Volume Vs. Timepoint')
plt.xlabel('Days')
plt.ylabel('Tumor Volume')
plt.xticks(np.arange(min(timepoint), max(timepoint)+1, 5))
plt.legend(['Tumor Volume (mm3)'])
plt.tight_layout()
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
# Getting data for the plot
capomulin_vol_weight = cleaned_df.loc[(cleaned_df['Drug Regimen'] == 'Capomulin')]
capomulin_avg = capomulin_vol_weight.groupby(['Mouse ID']).mean()
# Set variables
mouse_weight = capomulin_avg['Weight (g)']
avg_tumor= capomulin_avg['Tumor Volume (mm3)']
# Generate the scatter plot
plt.scatter(mouse_weight,avg_tumor,marker="o", color='orange')
# Give the plot a legend, a title, x label, and y label
plt.legend(['Tumor Volume (mm3)'],loc='lower right')
plt.title('Capomulin: Average Tumor Volume Vs. Mouse Weight')
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Avg Tumor Volume (mm3)')
# Set the limits of the x and y axis
plt.xlim(min(mouse_weight) -2, max(mouse_weight)+2)
plt.ylim(min(avg_tumor) -2, max(avg_tumor)+2)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
print(f'The correlation coefficient between mouse weight and average tumor volume is {round(st.pearsonr(mouse_weight,avg_tumor)[0],2)} , for the Capomulin regimen.')
# Perform a linear regression on mouse weight and average tumor volume
slope, intercept, rvalue, pvalue, std_err = st.linregress(mouse_weight,avg_tumor)
# Create equation of line
line_eq = slope * mouse_weight + intercept
# Plotting scatter and linear model for weight versus tumor volume
plt.scatter(mouse_weight,avg_tumor, marker="o", color='orange')
plt.plot(mouse_weight,line_eq,"--",linewidth=1, color="g")
# Give the plot a legend, a title, x label, and y label
plt.legend(['Tumor Volume (mm3)'],loc='lower right')
plt.title('Capomulin: Average Tumor Volume Vs. Mouse Weight')
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Avg Tumor Volume (mm3)')
# Set the limits of the x and y axis
plt.xlim(min(mouse_weight) -2, max(mouse_weight)+2)
plt.ylim(min(avg_tumor) -2, max(avg_tumor)+2)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights The more effective treatments, Capomulin and Ramicane, have more data points. There is a positive correlation between mouse weight and tumor volume. There is an even number of male and female mice in the study. The more effective treatments also had smaller variances. Ketapril was the worst treatment. Dependencies
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
mouse_study_df = pd.merge(mouse_metadata, study_results, on="Mouse ID", how="outer")
mouse_study_df
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
grouped_mouse_study_df = mouse_study_df[["Drug Regimen", "Tumor Volume (mm3)"]].groupby("Drug Regimen")
#Finding mean and median and merging them
mean = grouped_mouse_study_df.mean()
median = grouped_mouse_study_df.median()
Summary_statistics_df = pd.merge(mean, median, on="Drug Regimen", suffixes= [" Mean", " Median"])
# Finding variance and std and merging them
variance = grouped_mouse_study_df.var()
Standard_deviation = grouped_mouse_study_df.std()
var_std_table = pd.merge(variance, Standard_deviation, on="Drug Regimen", suffixes=[" Variance", " Standard Deviation"])
Summary_statistics_df = pd.merge(Summary_statistics_df, var_std_table, on="Drug Regimen")
# Finding SEM and merging it
SEM = grouped_mouse_study_df.sem()
Summary_statistics_df = pd.merge(Summary_statistics_df, SEM, on="Drug Regimen")
Summary_statistics_df.rename(columns={"Tumor Volume (mm3)":"Tumor Volume (mm3) SEM"}, inplace=True)
Summary_statistics_df
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
mouse_study_df["Drug Regimen"].value_counts().plot(kind="bar", color = "blue", title="Number of Data Points per Treatment Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Data Points")
plt.xlim(-0.75, 9.75)
plt.ylim(0, 260)
plt.tight_layout()
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
plt.bar(mouse_study_df["Drug Regimen"].unique(), mouse_study_df["Drug Regimen"].value_counts(), color ="blue", align="center", width=0.5)
plt.xticks(rotation="vertical")
plt.title("Number of Data Points per Treatment Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Data Points")
plt.xlim(-0.75, 9.75)
plt.ylim(0, 260)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
mouse_study_df["Sex"].value_counts().plot(kind="pie", colors=["blue", "red"], shadow=True, autopct="%1.1f%%",
title="Sex Distribution of study")
plt.legend(loc="best")
plt.ylabel("")
plt.axis("equal")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(mouse_study_df["Sex"].value_counts(), labels= mouse_study_df["Sex"].unique(), colors=["blue", "red"], shadow=True,
autopct="%1.1f%%")
plt.title("Sex Distribution of study")
plt.legend(loc="best")
plt.axis("equal")
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
mouse_ids = mouse_study_df["Mouse ID"].unique()
mouse_ids
last_timepoints = pd.DataFrame({"Mouse ID":[], "Drug Regimen":[], "Sex":[], "Age_months":[], "Weight (g)":[],
"Timepoint":[], "Tumor Volume (mm3)":[], "Metastatic Sites":[]})
for mouse in mouse_ids:
sample_mouse = mouse_study_df.loc[mouse_study_df["Mouse ID"] == mouse,:]
sample_mouse = sample_mouse.sort_values(by="Timepoint", ascending=True)
last_timepoint = sample_mouse.iloc[-1,:]
last_timepoints = last_timepoints.append(last_timepoint, ignore_index=True)
last_timepoints
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
last_timepoints_of_top_regimens = last_timepoints.loc[((last_timepoints["Drug Regimen"] == "Capomulin") | \
(last_timepoints["Drug Regimen"] == "Ramicane") | \
(last_timepoints["Drug Regimen"] == "Infubinol") | \
(last_timepoints["Drug Regimen"] == "Ceftamin")),
["Mouse ID", "Drug Regimen", "Tumor Volume (mm3)"]]
last_timepoints_of_top_regimens
quartiles = last_timepoints_of_top_regimens["Tumor Volume (mm3)"].quantile([0.25,0.5,0.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
lowerbound = lowerq - (1.5*iqr)
upperbound = upperq + (1.5*iqr)
outliers = last_timepoints_of_top_regimens.loc[((last_timepoints_of_top_regimens["Tumor Volume (mm3)"] < lowerbound) | \
(last_timepoints_of_top_regimens["Tumor Volume (mm3)"] > upperbound)),:]
if len(outliers) > 0:
print("There are potential outliers")
else:
print("There are no outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
Capomulin = last_timepoints_of_top_regimens.loc[last_timepoints_of_top_regimens["Drug Regimen"] == "Capomulin",["Tumor Volume (mm3)"]]
Ramicane = last_timepoints_of_top_regimens.loc[last_timepoints_of_top_regimens["Drug Regimen"] == "Ramicane",["Tumor Volume (mm3)"]]
Infubinol = last_timepoints_of_top_regimens.loc[last_timepoints_of_top_regimens["Drug Regimen"] == "Infubinol",["Tumor Volume (mm3)"]]
Ceftamin = last_timepoints_of_top_regimens.loc[last_timepoints_of_top_regimens["Drug Regimen"] == "Ceftamin",["Tumor Volume (mm3)"]]
top_regimens = [Capomulin["Tumor Volume (mm3)"], Ramicane["Tumor Volume (mm3)"], Infubinol["Tumor Volume (mm3)"],
Ceftamin["Tumor Volume (mm3)"]]
red_tri = dict(markerfacecolor="red", markeredgecolor= "red", marker= "1")
fig, ax1 = plt.subplots(sharey=True)
fig.suptitle("Final Tumor Size across top Treatment Regimens")
ax1.boxplot(top_regimens, flierprops=red_tri)
ax1.set_ylabel("Final Tumor Sizes")
ax1.set(xticklabels=["Capomulin", "Ramicane","Infubinol", "Ceftamin"])
ax1.set_ylim(15, 80)
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
mouse = mouse_study_df.loc[mouse_study_df["Drug Regimen"] == "Capomulin",
["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
mouse_id = input(f"Which mouse would you like to look for? {mouse['Mouse ID'].unique()} ")
# mouse_id = "s185"
mouse = mouse.loc[mouse["Mouse ID"] == mouse_id, ["Timepoint", "Tumor Volume (mm3)"]]
plt.plot(mouse["Timepoint"], mouse["Tumor Volume (mm3)"], color = "blue", marker="D")
plt.title(f"The tumor size of mouse {mouse_id} over time")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Timepoint")
plt.xlim(-2, 47)
plt.ylim(min(mouse["Tumor Volume (mm3)"])-5, max(mouse["Tumor Volume (mm3)"])+5)
plt.xticks(np.arange(0,50,5))
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
mouse_weight = mouse_study_df.loc[mouse_study_df["Drug Regimen"] == "Capomulin",
["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
mouse_weight = mouse_weight.groupby("Mouse ID").mean()
plt.scatter(mouse_weight["Weight (g)"], mouse_weight["Tumor Volume (mm3)"], marker='o', color='green', label="Tumor Volume by Mouse Weight")
plt.title("Average Tumor Volume vs Mouse Weight")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Mouse Weight (g)")
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(mouse_weight["Weight (g)"], mouse_weight["Tumor Volume (mm3)"])
regress_value = slope * mouse_weight["Weight (g)"] + intercept
plt.plot(mouse_weight["Weight (g)"], regress_value, color="red", label="line of best fit")
plt.legend(loc="best")
plt.show()
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(mouse_weight["Weight (g)"], mouse_weight["Tumor Volume (mm3)"])
round(correlation[0],2)
# Pie plot of gender distribution at beginning of trial
gender_survival_at_first = mouse_study_df.loc[(mouse_study_df["Timepoint"] == 0),:]
gender_survival_at_first["Sex"].value_counts().plot(kind="pie", colors=["blue", "red"], shadow=True, autopct="%1.1f%%",
title="Sex Distribution at Beginning of Study")
# Pie plot distribution of survivors' gender at end of trial
gender_survival_at_first = mouse_study_df.loc[(mouse_study_df["Timepoint"] == 45),:]
gender_survival_at_first["Sex"].value_counts().plot(kind="pie", colors=["blue", "red"], shadow=True, autopct="%1.1f%%",
title="Sex Distribution at End of Study")
# Histogram of final timepoints for each mouse
plt.hist(x=last_timepoints["Timepoint"], bins= [0,5,10,15,20,25,30,35,40,45,50])
plt.title("Distribution of Final Timepoints per Mouse")
plt.ylabel("Count of timepoint")
plt.xlabel("Timepoint")
plt.xticks([0,5,10,15,20,25,30,35,40,45])
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Checking the number of mice in the DataFrame.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
combine_df
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
latest_timepoint = combine_df.groupby("Mouse ID").max()["Timepoint"].reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
new_merge_df = latest_timepoint[['Mouse ID', 'Timepoint']].merge(combine_df ,\
on=["Mouse ID","Timepoint"], how='left')
new_merge_df
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
treatment_list = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
for treatment in treatments:
df = new_merge_df[new_merge_df['Drug Regimen'] == treatment]['Tumor Volume (mm3)']
iqr = df.quantile(.75) - df.quantile(.25)
lower_bound = df.quantile(.25) - (1.5*iqr)
upper_bound = df.quantile(.75) + (1.5*iqr)
# add subset
treatment_list.append(df)
# Determine outliers using upper and lower bounds
print(f'{treatment} potential outliers: {df[(df<lower_bound) | (df>upper_bound)]}')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
treatment_list
plt.boxplot(treatment_list,labels = treatments, flierprops={'markerfacecolor':'red','markersize':12})
plt.ylabel('Final Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_df = combine_df.loc[combine_df["Drug Regimen"]== "Capomulin"]
mouse = capomulin_df.loc[capomulin_df["Mouse ID"] == "b128"]
x = mouse["Timepoint"]
y = mouse["Tumor Volume (mm3)"]
plt.plot(x,y, color="pink")
plt.xlabel("Timepoint", labelpad=15)
plt.ylabel("Tumor Volume (mm3)", labelpad=15)
plt.title("Tumor Volume vs. Timepoint for Mouse b128 Treated with Capomulin", y=1.02, fontsize=22);
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
cap = capomulin_df.groupby('Mouse ID').mean().reset_index()
cap.plot.scatter('Weight (g)','Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
val = round(st.pearsonr(cap['Weight (g)'],cap['Tumor Volume (mm3)'])[0],2)
print(f'The correlation between mouse weight and the average tumor volume is {val}')
plt.scatter(cap['Weight (g)'], cap['Tumor Volume (mm3)'])
plt.ylabel('Average Tumor Volume (mm3)')
plt.xlabel('Weight (g)')
model = st.linregress(cap['Weight (g)'], cap['Tumor Volume (mm3)'])
y = cap['Weight (g)']*model[0]+model[1]
plt.plot(cap['Weight (g)'],y, color='red', linewidth=3)
plt.show()
###Output
The correlation between mouse weight and the average tumor volume is 0.84
###Markdown
Observations and Insights 1. Except for Capomulin and Ramicane, most drug regimens failed to reduce the tumor size after a certain period of time. 2. Capomulin and Ramicane are the two most effective treatments for reducing tumor size; specifically, the tumor shrank by more than one quarter of its original size under these two treatments. 3. Mouse weight has a strong effect on the average tumor size: as mouse weight increases, the average tumor size increases.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
print(list(mouse_metadata))
print(list(study_results))
# Combine the data into a single dataset
merge_df=pd.merge(mouse_metadata,study_results,on="Mouse ID", how="outer")
# Display the data table for preview
merge_df
# Checking the number of mice.
merge_df["Mouse ID"].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
merge_df.loc[merge_df.duplicated(subset=["Mouse ID", "Timepoint"], keep=False),["Mouse ID", "Timepoint"]]
# Optional: Get all the data for the duplicate mouse ID.
merge_df.loc[merge_df.duplicated(subset=["Mouse ID","Timepoint"], keep=False)]
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df=merge_df.drop_duplicates(subset=["Mouse ID","Timepoint"], keep='last')
clean_df.reset_index(drop=True)
# Checking the number of mice in the clean DataFrame.
clean_df["Mouse ID"].nunique()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
regimen_df=clean_df.groupby("Drug Regimen")
# This method is the most straightforward, creating multiple series and putting them all together at the end.
rmean=regimen_df["Tumor Volume (mm3)"].mean()
rmedian=regimen_df["Tumor Volume (mm3)"].median()
rvar=regimen_df["Tumor Volume (mm3)"].var()
rstd=regimen_df["Tumor Volume (mm3)"].std()
rSEM=regimen_df["Tumor Volume (mm3)"].sem()
summary_df_1=pd.DataFrame({"mean":rmean,"median":rmedian,"variance":rvar,"std":rstd,"SEM":rSEM})
summary_df_1
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby/agg call
summary_df_2=regimen_df["Tumor Volume (mm3)"].agg(["mean","median","var","std","sem"])
summary_df_2
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
regimen_df["Mouse ID"].nunique().plot(kind="bar", figsize=(10,6))
plt.title("Total number of mice for each treatment regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Total number of mice")
plt.tight_layout()
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
xaxis = range(len(regimen_df["Mouse ID"].nunique()))
tickLocations = [value for value in xaxis]
plt.figure(figsize=(10, 6))
plt.bar(xaxis, regimen_df["Mouse ID"].nunique(), color='blue', alpha=1)
plt.xticks(tickLocations, list(regimen_df["Mouse ID"].nunique().index), rotation="vertical")
plt.xlim(-0.75, len(xaxis) - 0.25)
plt.ylim(0, 26)
plt.title("Total number of mice for each treatment regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Total number of mice")
plt.tight_layout()
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
nsex = clean_df["Sex"].value_counts()
nsex.plot(kind="pie", autopct='%1.1f%%',figsize=(5,6))
plt.tight_layout()
plt.axis("equal")
plt.title("Distribution of female versus male mice")
plt.tight_layout()
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.figure(figsize=(5, 7))
plt.pie(nsex.values, labels=nsex.index.values, autopct="%1.1f%%")
# Create axes which are equal so we have a perfect circle
plt.axis("equal")
plt.title("Distribution of female versus male mice")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
select_df=clean_df.loc[(clean_df["Drug Regimen"] == "Capomulin") | (clean_df["Drug Regimen"] == "Ramicane") |
(clean_df["Drug Regimen"] == "Infubinol") | (clean_df["Drug Regimen"] == "Ceftamin")]
# Start by getting the last (greatest) timepoint for each mouse
select_last = select_df.groupby('Mouse ID').max()['Timepoint']
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
select_ser = pd.DataFrame(select_last)
select_merge = pd.merge(select_ser, clean_df, on=("Mouse ID","Timepoint"),how="left")
select_merge
# Put treatments into a list for for loop (and later for plot labels)
reglist=["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_list=[]
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for reg in reglist:
# Locate the rows which contain mice on each drug and get the tumor volumes
tumor_vol = select_merge.loc[select_merge["Drug Regimen"] == reg,"Tumor Volume (mm3)"]
tumor_vol.head()
tumor_vol_list.append(tumor_vol)
# add subset
# Determine outliers using upper and lower bounds
quartiles = tumor_vol.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"{reg} potential outliers: Lower bound:{round(lower_bound,2)}; Upper bound: {round(upper_bound,2)}.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig, axis = plt.subplots()
axis.set_title('Final tumor volume vs Drug Regimen')
axis.set_ylabel('Final Tumor Volume (mm3)')
axis.set_xlabel('Drug Regimen')
axis.boxplot(tumor_vol_list, labels=["Capomulin","Ramicane","Infubinol","Ceftamin",])
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capo_df = select_df.loc[select_df["Mouse ID"] == "y793"]
capo_df
xaxis = capo_df["Timepoint"]
tumor_y793 = capo_df["Tumor Volume (mm3)"]
plt.plot(xaxis, tumor_y793, linewidth=2, markersize=12)
plt.title('Tumor volume for mouse y793 treated with Capomulin')
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capo_avg = select_df.loc[select_df["Drug Regimen"] == "Capomulin"].groupby(['Mouse ID']).mean()
plt.scatter(capo_avg['Weight (g)'],capo_avg['Tumor Volume (mm3)'])
plt.title('Mouse weight versus average tumor volume for the Capomulin regimen')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
x_values = capo_avg['Weight (g)']
y_values = capo_avg['Tumor Volume (mm3)']
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(capo_avg['Weight (g)'],capo_avg['Tumor Volume (mm3)'])
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(20,40),fontsize=15,color="red")
plt.title('Mouse weight versus average tumor volume for the Capomulin regimen')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy.stats as st
from scipy.stats import linregress
import sklearn.datasets as dta
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
mouse_metadata
study_results
# Combine the data into a single dataset
df = mouse_metadata.merge(study_results, left_on = 'Mouse ID', right_on = 'Mouse ID', how = 'inner').drop_duplicates()
# Display the data table for preview
df
# Checking the number of mice = 249
mice_num = df['Mouse ID'].nunique()
print(f"the number of mice is {mice_num}.")
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dups_id_timepoint = df[['Mouse ID', 'Timepoint']]
dups_id_timepoint[dups_id_timepoint.duplicated()]
# Optional: Get all the data for the duplicate mouse ID.
dups = df[df.duplicated(['Mouse ID','Timepoint'])]
dups
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
cleaned_df = df[df['Mouse ID'] != 'g989']
cleaned_df
# Checking the number of mice in the clean DataFrame.
num_cleaned_mice = cleaned_df['Mouse ID'].nunique()
print(f"there are total {num_cleaned_mice} mice after cleaning the DataFrame")
###Output
there are a total of 248 mice after cleaning the DataFrame
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
#mean
tumor_mean = round(cleaned_df['Tumor Volume (mm3)'].mean(),2)
print(f"mean of the tumor volume is {tumor_mean}")
#median
tumor_median = round(cleaned_df['Tumor Volume (mm3)'].median(),2)
print(f"median of the tumor volume is {tumor_median}")
#variance
tumor_variance = round(cleaned_df['Tumor Volume (mm3)'].var(),2)
print(f"variance of the tumor volume is {tumor_variance}")
#standard deviation
tumor_std = round(cleaned_df['Tumor Volume (mm3)'].std(),2)
print(f"std of the tumor volume is {tumor_std}")
#SEM
tumor_sem = round(cleaned_df['Tumor Volume (mm3)'].sem(),2)
print(f"sem of the tumor volume is {tumor_sem}")
#Summary statistics table
summary_table = pd.DataFrame({'mean': [tumor_mean],
'median':[tumor_median],
'variance':[tumor_variance],
'std':[tumor_std],
'sem':[tumor_sem]})
summary_table
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
cleaned_df.head()
drug_tumor_mean = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
drug_tumor_median = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
drug_tumor_variance = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
drug_tumor_std = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
drug_tumor_sem = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
table1 = pd.DataFrame(drug_tumor_mean)
summary_table = table1.rename(columns={"Tumor Volume (mm3)": "Mean"})
summary_table["Median"] = drug_tumor_median
summary_table["Variance"] = drug_tumor_variance
summary_table["std"] = drug_tumor_std
summary_table["sem"] = drug_tumor_sem
summary_table
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summary_table = cleaned_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(["mean",
"median",
"var",
"std",
"sem"])
summary_table
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
plt.style.use('ggplot')
cleaned_df
###Output
_____no_output_____
###Markdown
Bar chart
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
df_bar = pd.DataFrame(cleaned_df['Drug Regimen'].value_counts())
df_bar.plot(figsize = (10, 5),
kind = 'bar',
title = 'The total number of measurements taken on each drug regimen',
xlabel = 'Drug Regimen',
ylabel = 'Total Measurements');
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
df_bar = pd.DataFrame(cleaned_df['Drug Regimen'].value_counts()).reset_index()
df_bar_x = list(df_bar['index'])
df_bar_y = list(df_bar['Drug Regimen'])
fig1, ax1 = plt.subplots(figsize=(10,5))
ax1.bar(df_bar_x, df_bar_y);
ax1.set(xlabel = "Drug Regimen",
ylabel = "Total Measurements",
title = "The total number of measurements taken on each drug regimen");
###Output
_____no_output_____
###Markdown
Pie Chart
###Code
# Dataset
pie_chart = cleaned_df[['Mouse ID','Sex']].drop_duplicates().groupby('Sex').count().rename(columns = {'Mouse ID' : 'mice gender distribution'})
pie_chart
# Generate a pie plot showing the distribution of female versus male mice using pandas
pie_chart.plot.pie(figsize=(5, 5),
y='mice gender distribution',
autopct="%1.1f%%",
startangle=140)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
#Dataset
pie_chart = pie_chart.reset_index()
labels = list(pie_chart['Sex'])
values = list(pie_chart['mice gender distribution'])
explode = (0.1, 0)
#Pie Chart
fig2, ax2 = plt.subplots(figsize = (5,5));
ax2.pie(values,
explode=explode,
labels = labels,
autopct='%1.1f%%',
startangle=140,
shadow=True);
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
cleaned_df.head()
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
treatment = cleaned_df[cleaned_df['Drug Regimen'].isin(['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'])]
# Start by getting the last (greatest) timepoint for each mouse
timepoint = pd.DataFrame(treatment.groupby(['Drug Regimen','Mouse ID'])['Timepoint'].max()).reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merge = timepoint.merge(cleaned_df, left_on = ['Drug Regimen','Mouse ID','Timepoint'], right_on = ['Drug Regimen','Mouse ID','Timepoint'])
merge
# Put treatments into a list for for loop (and later for plot labels)
treatments = list(merge['Drug Regimen'].unique())
treatments
# Create empty list to fill with tumor vol data (for plotting)
tumor = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
for treatment in treatments:
tumor_vol = merge.loc[merge['Drug Regimen'] == treatment]['Tumor Volume (mm3)']
# add subset
tumor.append(tumor_vol)
# Determine outliers using upper and lower bounds
quartiles = tumor_vol.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of tumor_vol is: {lowerq}")
print(f"The upper quartile of tumor_vol is: {upperq}")
print(f"The interquartile range of tumor_vol is: {iqr}")
print(f"The the median of tumor_vol is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
print(" ")
plt.style.available
plt.style.use('seaborn-pastel')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
labels= ['Capomulin', 'Ceftamin', 'Infubinol', 'Ramicane']
fig3, ax3 = plt.subplots(figsize=(9,5));
ax3.set_title('Tumor Volume by Regimens');
ax3.set_ylabel('Tumor Volume (mm3)');
ax3.boxplot(tumor,
patch_artist=True,
labels=labels);
plt.show();
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
cleaned_df
###Output
_____no_output_____
###Markdown
line chart
###Code
plt.style.use('ggplot')
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
#dataset
Capomulin = cleaned_df[cleaned_df['Drug Regimen'] == 'Capomulin'][['Mouse ID','Tumor Volume (mm3)','Timepoint']]
mouse = Capomulin[Capomulin['Mouse ID'] == 's185']
mouse
x_axis = list(mouse['Timepoint'])
y_axis = list(mouse['Tumor Volume (mm3)'])
# line chart
fig4, ax4 = plt.subplots();
ax4.plot(x_axis,y_axis, color='red', marker='o');
ax4.set(title = "A mouse tumor volume vs. time point",
xlabel= "Timepoint",
ylabel = "Tumor Volume (mm3)",
ylim = (min(y_axis) - 3, max(y_axis) + 3)
);
###Output
_____no_output_____
###Markdown
Scatter Plot
###Code
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
# Dataset
avg_capo = cleaned_df.loc[cleaned_df['Drug Regimen'] == 'Capomulin'].groupby(['Mouse ID']).agg('mean').reset_index().rename(columns = {'Tumor Volume (mm3)': 'avg_tumor_vol'})
#axis
x_axis = list(avg_capo['Weight (g)'])
y_axis = list(avg_capo['avg_tumor_vol'])
# scatter plot
fig5, ax5 = plt.subplots(figsize=(10,7));
ax5.scatter(x_axis, y_axis, color='blue', marker="o");
ax5.set(title = "Capomulin average tumor volume vs. mouse weight",
xlabel = "Weight (g)",
ylabel = "Tumor Volume (mm3)");
plt.show();
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
#Dataset
avg_capo
#axis
x_axis = avg_capo['Weight (g)']
y_axis = avg_capo['avg_tumor_vol']
#correlation coefficient
print(f"The correlation coefficient between mouse weight and average tumor volume is {round(st.pearsonr(x_axis,y_axis)[0],2)}")
###Output
The correlation coefficient between mouse weight and average tumor volume is 0.84
###Markdown
linear regression model
###Code
#linear regression model
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_axis, y_axis)
regress_values = x_axis * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# scatter plot
fig5, ax5 = plt.subplots(figsize = (10,7));
ax5.scatter(x_axis, y_axis, color='blue', marker="*");
ax5.plot(x_axis,regress_values,"r-");
ax5.annotate(line_eq,(22,40),fontsize=15,color="red");
ax5.set(title = "Capomulin average tumor volume vs. mouse weight",
xlabel = "Weight (g)",
ylabel = "Tumor Volume (mm3)");
print(f"The r-squared is: {rvalue**2}")
plt.show();
###Output
The r-squared is: 0.7088568047708717
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
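# A minimal sketch (not the original solution): assumes the cleaned, merged study data
# is available as `cleaned_df` with columns 'Drug Regimen', 'Timepoint' and
# 'Tumor Volume (mm3)', as in the notebooks above.
mean_tumor = cleaned_df.groupby(["Drug Regimen", "Timepoint"])["Tumor Volume (mm3)"].mean().unstack(level=0)
sem_tumor = cleaned_df.groupby(["Drug Regimen", "Timepoint"])["Tumor Volume (mm3)"].sem().unstack(level=0)
# One error-bar series per regimen: x = timepoints, y = mean volume, yerr = SEM
for drug in mean_tumor.columns:
    plt.errorbar(mean_tumor.index, mean_tumor[drug], yerr=sem_tumor[drug],
                 marker="o", capsize=3, label=drug)
plt.title("Tumor Response to Treatment")
plt.xlabel("Timepoint (days)")
plt.ylabel("Mean Tumor Volume (mm3)")
plt.legend(loc="best")
# Save the Figure (illustrative filename)
plt.savefig("treatment_sketch.png")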
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
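# A minimal sketch following the same pivot-and-errorbar pattern as the tumor-response
# sketch above; assumes `cleaned_df` also carries the 'Metastatic Sites' column.
mean_met = cleaned_df.groupby(["Drug Regimen", "Timepoint"])["Metastatic Sites"].mean().unstack(level=0)
sem_met = cleaned_df.groupby(["Drug Regimen", "Timepoint"])["Metastatic Sites"].sem().unstack(level=0)
for drug in mean_met.columns:
    plt.errorbar(mean_met.index, mean_met[drug], yerr=sem_met[drug],
                 marker="s", capsize=3, label=drug)
plt.title("Metastatic Spread During Treatment")
plt.xlabel("Timepoint (days)")
plt.ylabel("Mean Metastatic Sites")
plt.legend(loc="best")
plt.show()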
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
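# A minimal sketch (assumption: `cleaned_df` as above). Survival is read as the number
# of mice still being measured at each timepoint, expressed as a percentage of the
# count at timepoint 0 for that regimen.
mouse_counts = cleaned_df.groupby(["Drug Regimen", "Timepoint"])["Mouse ID"].count().unstack(level=0)
survival_pct = mouse_counts / mouse_counts.iloc[0] * 100
survival_pct.plot(marker="o", figsize=(10, 6))
plt.title("Survival During Treatment")
plt.xlabel("Timepoint (days)")
plt.ylabel("Survival Rate (%)")
# Save the Figure (illustrative filename)
plt.savefig("survival_sketch.png")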
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
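# A minimal sketch (assumption: the `mean_tumor` pivot table built in the tumor-response
# sketch above). Percent change in mean tumor volume between the first and last
# timepoint, coloured green for shrinking tumors and red for growing ones.
pct_change = (mean_tumor.iloc[-1] - mean_tumor.iloc[0]) / mean_tumor.iloc[0] * 100
colors = ["green" if change < 0 else "red" for change in pct_change]
fig, ax = plt.subplots(figsize=(10, 5))
ax.bar(pct_change.index, pct_change.values, color=colors)
ax.axhline(0, color="black", linewidth=0.8)
ax.set_title("Tumor Change Over Full Treatment")
ax.set_ylabel("% Tumor Volume Change")
ax.tick_params(axis="x", labelrotation=45)
# Label each bar with its percent change
for x, change in enumerate(pct_change):
    ax.text(x, change + (2 if change > 0 else -4), f"{change:.0f}%", ha="center")
# Save the Figure (illustrative filename)
fig.savefig("summary_bar_sketch.png")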
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
mouse_metadata
study_results
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
mouse_metadata.head()
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview that Reformatting worked
# Generate the Plot (with Error Bars)
# Save the Figure
# Show the Figure
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric)
# Convert to DataFrame
# Preview DataFrame
# Minor Data Munging to Re-Format the Data Frames
# Preview the Data Frame
# Generate the Plot (Accounting for percentages)
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
study_results
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as sts
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merged_df=pd.merge(mouse_metadata, study_results, how='left', on='Mouse ID')
# Display the data table for preview
merged_df.head()
mouse_metadata.head()
study_results.head()
# Checking the number of mice.
len(merged_df['Mouse ID'].unique())
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_id=merged_df.loc[merged_df.duplicated(subset=['Mouse ID','Timepoint']),'Mouse ID'].unique()
duplicate_id
# Optional: Get all the data for the duplicate mouse ID.
duplicate_id_df = merged_df.loc[merged_df['Mouse ID']=='g989']
duplicate_id_df
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
cleaned_df = merged_df.loc[merged_df['Mouse ID']!='g989']
cleaned_df
# Checking the number of mice in the clean DataFrame.
len(cleaned_df['Mouse ID'].unique())
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean_mouse = cleaned_df.groupby('Drug Regimen').mean()['Tumor Volume (mm3)']
median_mouse = cleaned_df.groupby('Drug Regimen').median()['Tumor Volume (mm3)']
var_mouse = cleaned_df.groupby('Drug Regimen').var()['Tumor Volume (mm3)']
std_mouse = cleaned_df.groupby('Drug Regimen').std()['Tumor Volume (mm3)']
sem_mouse = cleaned_df.groupby('Drug Regimen').sem()['Tumor Volume (mm3)']
summary_df = pd.DataFrame({'Mean Tumor Volume':mean_mouse,
                           'Median Tumor Volume':median_mouse,
                           'Variance Tumor Volume':var_mouse,
                           'Standard Deviation of Tumor Volume':std_mouse,
                           'SEM of Tumor Volume':sem_mouse})
summary_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
#df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'})
summary_df2 = cleaned_df.groupby('Drug Regimen').agg({'Tumor Volume (mm3)':['mean', 'median','var','std','sem']})
summary_df2
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# .valuecounts on drug regimen =mouse_count
color_list = ["green", "red", "blue", "yellow", "purple", "orange", "coral", "black","brown", "gray"]
regimen_summary = cleaned_df['Drug Regimen'].value_counts()
regimen_summary.plot(kind='bar',figsize=(10,5),rot=0,color=color_list,alpha=.65)
# Set a Title for the chart
plt.title('Total Number of Measurements per Regimen')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.ylim(125,250)
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
#regimen_summary = cleaned_df['Drug Regimen'].value_counts()
#regimen_summary
drug_id_time_df = cleaned_df[["Drug Regimen","Timepoint","Mouse ID"]]
# Take labels and counts from the same value_counts() call so bars and labels stay aligned
regimen_counts = drug_id_time_df['Drug Regimen'].value_counts()
x = regimen_counts.index.tolist()
y = regimen_counts.values.tolist()
plt.figure()
plt.bar(x,y,color=color_list, alpha=.8,width=.4)
plt.title('Total Number of Measurements per Regimen')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.ylim(100, 250)
# Generate a pie plot showing the distribution of female versus male mice using pandas
M_vs_F = cleaned_df["Sex"].value_counts()
#print(M_vs_F)
gender = ["Male", "Female",]
explode = (0, .1)
M_vs_F.plot(kind="pie",autopct="%1.1f%%",startangle=140,colors = ['lightsalmon','darkturquoise'],explode = (0, .07),shadow=True)
plt.title('Distribution of Mouse Sexes')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
M_vs_F = cleaned_df["Sex"].value_counts()
#print(M_vs_F)
# Labels for the sections of our pie chart
gender = ["Male", "Female",]
# The colors of each section of the pie chart
color = color_list
# Tells matplotlib to separate the "Female" section from the others
explode = (0, .07)
# Creates the pie chart based upon the values above
# Automatically finds the percentages of each part of the pie chart
plt.pie(M_vs_F, colors=['orchid','paleturquoise'],autopct="%1.1f%%", shadow=True, startangle=140, labels=gender, explode=explode,)
plt.title('Distribution of Mouse Sexes')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
drug_success = cleaned_df.groupby(["Mouse ID"]).max()
drug_success = drug_success.reset_index()
drug_success.head()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merged_data = drug_success[['Mouse ID','Timepoint']].merge(cleaned_df,on=['Mouse ID','Timepoint'],how="left")
merged_data.head()
tumor_vol_list = []
treatment_list = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
for drug in treatment_list:
final_tumor_vol = merged_data.loc[merged_data["Drug Regimen"] == drug, 'Tumor Volume (mm3)']
tumor_vol_list.append(final_tumor_vol)
fig1, axl = plt.subplots()
# ax1.set_ylabel('Final Tumor Volume (mm3)')
# axl.boxplot(tumor_vol_list)
# plt.show()
axl.boxplot(tumor_vol_list, labels = treatment_list)
# plt.ylabel('Final Tumor Volume (mm3)')
plt.title("Drug Trial Results baased on Tumor Volume (mm3)")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Drug Adminstered")
plt.grid=(True)
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_data = cleaned_df[cleaned_df["Drug Regimen"] == "Capomulin"]
capomulin_mouse_data = capomulin_data[capomulin_data["Mouse ID"] == "s185"]
x_line = capomulin_mouse_data["Timepoint"]
y_line = capomulin_mouse_data["Tumor Volume (mm3)"]
plt.plot(x_line, y_line)
plt.title("Treatment of Mouse 's185' on Capomulin")
plt.xlabel("Time (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.grid(True)
plt.xlim(0,45.5)
plt.ylim(0,50)
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
grouped_mouse = capomulin_data.groupby(["Mouse ID"])
grouped_weight = grouped_mouse["Weight (g)"].mean()
avg_tumor_size_bymouse = grouped_mouse["Tumor Volume (mm3)"].mean()
plt.scatter(x = grouped_weight, y = avg_tumor_size_bymouse)
plt.title("Average Tumor Size (mm3) vs. Weight of Mouse during Capomulin Drug Trial")
plt.xlabel("Weight of Mouse (g)")
plt.ylabel("Average Tumor Size (mm3)")
plt.grid(True)
plt.xlim(12,28)
plt.ylim(30,50)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
corr_coeff = round(sts.pearsonr(grouped_weight,avg_tumor_size_bymouse)[0],2)
plt.scatter(x = grouped_weight, y = avg_tumor_size_bymouse)
plt.title("Average Tumor Size (mm3) vs. Weight of Mouse during Capomulin Drug Trial")
plt.xlabel("Weight of Mouse (g)")
plt.ylabel("Average Tumor Size (mm3)")
plt.grid(True)
plt.xlim(14,26)
plt.ylim(30,50)
linregress = sts.linregress(x = grouped_weight, y = avg_tumor_size_bymouse)
slope = linregress[0]
intercept = linregress[1]
bestfit = slope*grouped_weight + intercept
plt.plot(grouped_weight,bestfit, "--",color = "red")
plt.show()
print(f'The correlation coefficient is {corr_coeff} for the Mouse Weight against the Tumor volume.')
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
#mouse_metadata.head()
#study_results.head()
# Combine the data into a single dataset
results_df = pd.merge(mouse_metadata, study_results, on = ["Mouse ID"])
results_df.head()
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
#grouped_results_df = results_df.groupby(["Drug Regimen"])
#grouped_results_df.head()
summary_stats = results_df.groupby('Drug Regimen').agg({'Tumor Volume (mm3)': ['mean', 'median', 'var','std', 'sem']})
summary_stats_df = pd.DataFrame(summary_stats)
summary_stats_df
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
regimen_counts = results_df['Drug Regimen'].value_counts()
regimen_counts.plot(kind="bar")
plt.title("Data Points by Drug Regimen")
plt.xlabel("Regimen")
plt.ylabel("Number of Data Points")
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
plt.figure()
plt.bar(regimen_counts.index, regimen_counts.values)
plt.xticks(rotation=90)
plt.title("Data Points by Drug Regimen")
plt.xlabel("Regimen")
plt.ylabel("Number of Data Points")
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_stats = results_df.groupby('Sex').agg({'Tumor Volume (mm3)': ['mean', 'median', 'var','std', 'sem']})
gender_stats_df = pd.DataFrame(gender_stats)
gender_stats_df
gender_counts = results_df['Sex'].value_counts()
gender_counts.plot(kind="pie", autopct="%1.1f%%")
plt.title("Distribution of Mice by Sex")
plt.ylabel("")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.figure()
plt.pie(gender_counts, labels=gender_counts.index, autopct="%1.1f%%")
plt.title("Distribution of Mice by Sex")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
#Create Dataframe containing information related only to Capomulin, Ramicane, Infubinol, and Ceftamin
promising_treatment_df = results_df.loc[results_df["Drug Regimen"].isin(["Capomulin","Ramicane", "Infubinol", "Ceftamin"])]
#Calculate final tumor volume for each mouse ID for Capomulin, Ramicane, Infubinol, and Ceftamin
tum_vol_promising_treatment = promising_treatment_df.groupby(["Mouse ID","Drug Regimen"])["Tumor Volume (mm3)"].last()
#Create Dataframe with Final Tumor Volume by MouseID
final_tum_vol_mid_df = pd.DataFrame({"Tumor Volume (mm3)": tum_vol_promising_treatment})
final_tum_vol_mid_df
#Calculate the IQR and quantitatively determine if there are any potential outliers.
volume = final_tum_vol_mid_df["Tumor Volume (mm3)"]
quartiles = volume.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume')
ax1.set_ylabel('Tumor Volume')
ax1.boxplot(volume)
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_treatment_df = results_df.loc[results_df["Drug Regimen"].isin(["Capomulin"])]
capomulin_treatment_df
df = capomulin_treatment_df.set_index("Mouse ID")
df.head()
s185_data = df.loc["s185"]
s185_data
s185_plot = plt.plot(s185_data["Timepoint"], s185_data["Tumor Volume (mm3)"], marker ='o', color='blue')
plt.title("Timepoint vs. Tumor Volume for Subject s185")
plt.xlabel("timepoint")
plt.ylabel("Volume")
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_treatment_df = results_df.loc[results_df["Drug Regimen"].isin(["Capomulin"])]
capomulin_treatment_df
df = capomulin_treatment_df.set_index("Mouse ID")
df
df[["Weight (g)", "Tumor Volume (mm3)"]]
avg_df = df.groupby(["Mouse ID"]).mean()
plt.scatter(avg_df["Weight (g)"], avg_df["Tumor Volume (mm3)"], marker ="o", facecolors="red", edgecolors="black")
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
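# A minimal sketch of this calculation (not part of the original notebook), assuming
# avg_df from the cell above (per-mouse means for the Capomulin regimen) and st = scipy.stats.
weight = avg_df["Weight (g)"]
volume = avg_df["Tumor Volume (mm3)"]
print(f"The correlation coefficient is {st.pearsonr(weight, volume)[0]:.2f}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(weight, volume)
plt.plot(weight, slope * weight + intercept, "k--")
plt.scatter(weight, volume, marker="o", facecolors="red", edgecolors="black")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()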
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Checking the number of mice in the DataFrame.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
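# A minimal sketch of the steps listed above, using the hypothetical names
# combined and clean (they are not part of the original starter code).
combined = pd.merge(mouse_metadata, study_results, on="Mouse ID")
print(combined["Mouse ID"].nunique())                                  # number of mice before cleaning
dup_ids = combined.loc[combined.duplicated(subset=["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
print(dup_ids)                                                         # Mouse IDs with duplicated timepoints
combined.loc[combined["Mouse ID"].isin(dup_ids)]                       # optional: inspect the duplicate mouse
clean = combined[~combined["Mouse ID"].isin(dup_ids)]                  # drop the duplicated mouse entirely
print(clean["Mouse ID"].nunique())                                     # number of mice after cleaning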
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function.
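# A minimal sketch, reusing the hypothetical clean frame from the sketch above.
grouped_vol = clean.groupby("Drug Regimen")["Tumor Volume (mm3)"]
# Series-by-series version
summary_sketch = pd.DataFrame({"mean": grouped_vol.mean(),
                               "median": grouped_vol.median(),
                               "var": grouped_vol.var(),
                               "std": grouped_vol.std(),
                               "sem": grouped_vol.sem()})
# Single-call aggregation version
summary_agg_sketch = grouped_vol.agg(["mean", "median", "var", "std", "sem"])
summary_agg_sketch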
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
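# A minimal sketch of both bar charts, reusing the hypothetical clean frame from
# the sketch above and counting the measurements recorded for each regimen.
regimen_counts = clean["Drug Regimen"].value_counts()
regimen_counts.plot(kind="bar")                         # pandas version
plt.ylabel("Number of Measurements")
plt.show()
plt.figure()
plt.bar(regimen_counts.index, regimen_counts.values)    # pyplot version
plt.xticks(rotation=90)
plt.ylabel("Number of Measurements")
plt.show()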
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
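# A minimal sketch of both pie charts, reusing the hypothetical clean frame from the sketches above.
sex_counts = clean["Sex"].value_counts()
sex_counts.plot(kind="pie", autopct="%1.1f%%")                      # pandas version
plt.ylabel("")
plt.show()
plt.figure()
plt.pie(sex_counts, labels=sex_counts.index, autopct="%1.1f%%")     # pyplot version
plt.show()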
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
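# A minimal sketch of the boxplot workflow, reusing the hypothetical clean frame from the sketches above.
last_tp = clean.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_vol = last_tp.merge(clean, on=["Mouse ID", "Timepoint"], how="left")
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
vols = []
for drug in treatments:
    v = final_vol.loc[final_vol["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    q1 = v.quantile(0.25)
    q3 = v.quantile(0.75)
    iqr = q3 - q1
    outliers = v[(v < q1 - 1.5 * iqr) | (v > q3 + 1.5 * iqr)]
    print(f"{drug}: IQR = {iqr:.2f}, potential outliers: {list(outliers.round(2))}")
    vols.append(v)
plt.boxplot(vols, labels=treatments)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()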
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
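# A minimal sketch of both plots, reusing the hypothetical clean frame from the sketches above.
capo = clean[clean["Drug Regimen"] == "Capomulin"]
one_mouse = capo[capo["Mouse ID"] == capo["Mouse ID"].iloc[0]]   # any Capomulin-treated mouse
plt.plot(one_mouse["Timepoint"], one_mouse["Tumor Volume (mm3)"])
plt.xlabel("Timepoint (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
by_mouse = capo.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
plt.scatter(by_mouse["Weight (g)"], by_mouse["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()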
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
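# A minimal sketch, reusing the hypothetical by_mouse frame from the sketch above (st = scipy.stats).
w = by_mouse["Weight (g)"]
t = by_mouse["Tumor Volume (mm3)"]
print(f"Correlation coefficient: {st.pearsonr(w, t)[0]:.2f}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(w, t)
plt.scatter(w, t)
plt.plot(w, slope * w + intercept, "r--")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()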
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
tumor_response_df = pd.DataFrame(combined_df.groupby(['Drug', 'Timepoint']).mean())
# Convert to DataFrame
tumor_response_df = tumor_response_df.drop(columns=['Metastatic Sites'])
tumor_response_df = tumor_response_df.reset_index()
# Preview DataFrame
tumor_response_df.head()
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# combined_df
tumor_sem_df = pd.DataFrame(combined_df.groupby(['Drug', 'Timepoint']).sem())
# Convert to DataFrame
tumor_sem_df = tumor_sem_df.drop(columns=['Metastatic Sites'])
tumor_sem_df = tumor_sem_df.drop(columns=['Mouse ID'])
tumor_sem_df = tumor_sem_df.reset_index()
# Preview DataFrame
tumor_sem_df.head()
# Minor Data Munging to Re-Format the Data Frames
tumor_response_mung_df = tumor_response_df.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
tumor_sem_mung_df = tumor_sem_df.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
# Preview that Reformatting worked
tumor_response_mung_df.head()
tumor_sem_mung_df.head()
# Generate the Plot (with Error Bars)
tumor_response_condensed = tumor_response_mung_df.drop(columns=drug_ignore)
tumor_err_condensed = tumor_sem_mung_df.drop(columns=drug_ignore)
# tumor_err_plot.hlines(0, 0, 10, alpha=0.25)
# tumor_err_plot.grid(axis='y')
tumor_err_plot = tumor_response_condensed.plot(figsize=(12,8), yerr=tumor_err_condensed, color=['r','b','g','k'], legend=False)
tumor_err_plot.set_prop_cycle(None)
tumor_err_plot = tumor_response_condensed.plot(figsize=(12,8), style=['-or', '-^b', '-sg', '-dk'], ax=tumor_err_plot)
tumor_err_plot.set_xlabel("Time (Days)")
tumor_err_plot.set_ylabel("Tumor Volume (mm3)")
tumor_err_plot.set_title("Tumor Response to Treatment")
tumor_err_plot.set_ylim(33, 73)
tumor_err_plot.set_xlim(-3,48)
tumor_err_plot.grid('on', axis='y')
plt.show()
#
# Save the Figure
fig = tumor_err_plot.get_figure()
fig.savefig("Images/Tumor_Means.png")
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Tumor Response to Treatment](../Images/treatment.png) Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
meta_response_df = pd.DataFrame(combined_df.groupby(['Drug', 'Timepoint']).mean())
# Convert to DataFrame
meta_response_df = meta_response_df.drop(columns=['Tumor Volume (mm3)'])
meta_response_df = meta_response_df.reset_index()
# Preview DataFrame
meta_response_df.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
meta_sem_df = pd.DataFrame(combined_df.groupby(['Drug', 'Timepoint']).sem())
# Convert to DataFrame
meta_sem_df = meta_sem_df.drop(columns=['Tumor Volume (mm3)'])
meta_sem_df = meta_sem_df.drop(columns=['Mouse ID'])
meta_sem_df = meta_sem_df.reset_index()
# Preview DataFrame
meta_sem_df.head()
# Minor Data Munging to Re-Format the Data Frames
meta_response_mung_df = meta_response_df.pivot(index="Timepoint", columns="Drug", values="Metastatic Sites")
meta_sem_mung_df = meta_sem_df.pivot(index="Timepoint", columns="Drug", values="Metastatic Sites")
# Preview that Reformatting worked
meta_response_mung_df.head()
meta_sem_mung_df.head()
# Generate the Plot (with Error Bars)
meta_response_condensed = meta_response_mung_df.drop(columns=drug_ignore)
meta_err_condensed = meta_sem_mung_df.drop(columns=drug_ignore)
meta_err_plot = meta_response_condensed.plot(figsize=(12,8), yerr=meta_err_condensed, color=['r','b','g','k'], legend=False)
meta_err_plot.set_prop_cycle(None)
meta_err_plot = meta_response_condensed.plot(figsize=(12,8), style=['-or', '-^b', '-sg', '-dk'], ax=meta_err_plot)
meta_err_plot.set_xlabel("Treatment Duration (Days)")
meta_err_plot.set_ylabel("Met. Sites")
meta_err_plot.set_title("Metastatic Spread During Treatment")
meta_err_plot.set_ylim(-.3, 3.8)
meta_err_plot.set_xlim(-3,48)
meta_err_plot.grid('on', axis='y')
plt.show()
#
# Save the Figure
fig = meta_err_plot.get_figure()
fig.savefig("Images/Tumor_Means.png")
###Output
_____no_output_____
###Markdown
![Metastatic Spread During Treatment](../Images/spread.png) Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (We can pass any metric)
survive_df = pd.DataFrame(combined_df.groupby(['Drug', 'Timepoint']).count())
survive_df = survive_df.drop(columns=['Metastatic Sites', 'Tumor Volume (mm3)'])
# Convert to DataFrame
survive_df = survive_df.rename(columns={
"Mouse ID" : "Mouse Count"
})
survive_df = survive_df.reset_index()
# Preview DataFrame
survive_df.head()
# Minor Data Munging to Re-Format the Data Frames
survive_mung_df = survive_df.pivot(index="Timepoint", columns="Drug", values="Mouse Count")
# Preview the Data Frame
survive_mung_df.head()
# Generate the Plot (Accounting for percentages)
sur_cap = 100*survive_mung_df['Capomulin']/survive_mung_df['Capomulin'].max()
sur_cet = 100*survive_mung_df['Ceftamin']/survive_mung_df['Ceftamin'].max()
sur_inf = 100*survive_mung_df['Infubinol']/survive_mung_df['Infubinol'].max()
sur_ket = 100*survive_mung_df['Ketapril']/survive_mung_df['Ketapril'].max()
sur_naf = 100*survive_mung_df['Naftisol']/survive_mung_df['Naftisol'].max()
sur_pla = 100*survive_mung_df['Placebo']/survive_mung_df['Placebo'].max()
sur_pro = 100*survive_mung_df['Propriva']/survive_mung_df['Propriva'].max()
sur_ram = 100*survive_mung_df['Ramicane']/survive_mung_df['Ramicane'].max()
sur_ste = 100*survive_mung_df['Stelasyn']/survive_mung_df['Stelasyn'].max()
sur_zon = 100*survive_mung_df['Zoniferol']/survive_mung_df['Zoniferol'].max()
# Save the Figure
x_axis = survive_mung_df.index   # actual timepoints in days, so the x-axis matches the "Time (Days)" label
x_axis
plt.plot(x_axis, sur_cap, linewidth=2, marker="o", color="red", label = "Capomulin")
plt.plot(x_axis, sur_inf, linewidth=2, marker="^", color="blue", label = "Infubinol")
plt.plot(x_axis, sur_ket, linewidth=2, marker="s", color="green", label = "Ketapril")
plt.plot(x_axis, sur_pla, linewidth=2, marker="d", color="gray", label = "Placebo")
plt.xlabel("Time (Days)")
plt.ylabel("Survival Rate (%)")
plt.title("Chances of Survival During Treatment")
plt.savefig("Images/Survival Chances.png")
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
![Survival During Treatment](../Images/survival.png) Summary Bar Graph
###Code
# Calculate the percent changes for each drug
tumorStart = tumor_response_mung_df.iloc[0,:].to_list()
tumorEnd = tumor_response_mung_df.iloc[9,:].to_list()
tumor_summary=pd.DataFrame({
"Drug" : drug_list,
"Tumor Size @ Start" : tumorStart,
"Tumor Size @ End" : tumorEnd
})
tumor_summary["Percent Change"] = 100*(tumor_summary["Tumor Size @ End"] - tumor_summary["Tumor Size @ Start"])/tumor_summary["Tumor Size @ Start"]
# Display the data to confirm
tumor_summary
# Store all Relevant Percent Changes into a Tuple
tumor_condensed_summ = tumor_summary#.drop(drug_ignore)
tumor_condensed_summ = tumor_condensed_summ.set_index('Drug')
# Splice the data between passing and failing drugs
failing_drugs = tumor_condensed_summ.loc[tumor_condensed_summ['Percent Change'] >= 0]
print(failing_drugs.head())
passing_drugs = tumor_condensed_summ.loc[tumor_condensed_summ['Percent Change'] < 0]
print(passing_drugs.head())
passing_drugs
tumor_bar = tumor_condensed_summ.drop(drug_ignore)
tumor_bar = tumor_bar.drop(columns=['Tumor Size @ Start', 'Tumor Size @ End'])
# tumor_bar = tumor_bar.sort_values(by=['Percent Change'])
# print(tumor_bar)
drug_bar = ["Capomulin","Infubinol","Ketapril","Placebo"]
x2_axis = np.arange(len(drug_bar)) + 0.5   # centers of the edge-aligned bars
# Green for the passing drug, red for the failing ones
ax = tumor_bar['Percent Change'].plot(kind='bar', color=['g','r','r','r'], width = 1, align='edge', rot=0, legend=False)
# Orient widths. Add labels, tick marks, etc.
ax.set_xticks(x2_axis)
ax.set_xticklabels(drug_bar, ha='center')
ax.set_xlim(-.3,4.3)
ax.set_ylim(-28,68)
ax.grid(which='major')
# Use functions to label the percentages of changes
def percent(ax, labels):
for p in ax.patches:
percentage = '{:.1f}%'.format(p.get_height())
x = p.get_x() + p.get_width()/2 - 0.2
y = p.get_y() + p.get_height()/4
ax.annotate(percentage, (x, y), color='w')
# Call functions to implement the function calls
percent(ax, tumor_bar['Percent Change'].to_list())
# Save the Figure
ax.set_ylabel("% Tumor Volume Change")
ax.set_xlabel("")
ax.set_title("Tumor Change Over 45 Day Treatment")
# Save before calling show() so the saved file is not blank
plt.savefig("Images/Final Analysis.png")
plt.show()
# Show the Figure
#fig.show()
###Output
Tumor Size @ Start Tumor Size @ End Percent Change
Drug
Ceftamin 45.0 64.132421 42.516492
Infubinol 45.0 65.755562 46.123472
Ketapril 45.0 70.662958 57.028795
Naftisol 45.0 69.265506 53.923347
Placebo 45.0 68.084082 51.297960
Tumor Size @ Start Tumor Size @ End Percent Change
Drug
Ceftamin 45.0 64.132421 42.516492
Infubinol 45.0 65.755562 46.123472
Ketapril 45.0 70.662958 57.028795
Naftisol 45.0 69.265506 53.923347
Placebo 45.0 68.084082 51.297960
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Checking the number of mice in the DataFrame.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
study_results_path = "data/Study_results.csv"
mouse_metadata_path = "data/Mouse_metadata.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combine_df = pd.merge(study_results, mouse_metadata, how="outer", on="Mouse ID")
# Display the data table for preview
combine_df.head()
total= combine_df['Mouse ID'].nunique()
total
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_df = combine_df.loc[combine_df.duplicated(['Mouse ID', 'Timepoint'])]
duplicate_df
# Optional: Get all the data for the duplicate mouse ID.
dup_id=combine_df['Mouse ID']=='g989'
combine_df[dup_id]
#Creating a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combine_df.loc[combine_df['Mouse ID'] != 'g989']
clean_df.head()
# Checking the number of mice.
clean_df['Mouse ID'].nunique()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
grp = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)']
# This method is the most straightforward, creating multiple series and putting them all together at the end.
grp_df=pd.DataFrame({'mean':grp.mean(),'median':grp.median(),'variance':grp.var(),'std':grp.std(),'sem':grp.sem()})
grp_df
grp_df.plot(kind="bar", rot=45)
plt.title("Tumor Volume Summary")
plt.legend(loc='upper right', bbox_to_anchor=(-0.1, 1))
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
grp.agg(['mean','median','var','std','sem'])
# This method produces everything in a single groupby function
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas
#df.groupby('Date').sum().sort(axis=0).plot(kind='bar')
mouse_time = clean_df.groupby('Drug Regimen').count()['Tumor Volume (mm3)']
#res = mouse_time.apply(lambda x: x.order(ascending=False).head(3))
#mouse_time.sort_values(by='Tumor Volume (mm3)', ascending=False)
mouse_time.plot(kind="bar", rot=90,color=['blue', '#FF9900', 'green', 'red', '#CC99FF',
'#800000', '#FF99CC', '#808080', '#FFFF00', '#FF00FF'])
# plt.ylabel('Number of Data Points')
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
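# A minimal pyplot sketch of the same chart (not part of the original notebook),
# reusing mouse_time from the pandas version above.
plt.figure()
plt.bar(mouse_time.index, mouse_time.values)
plt.xticks(rotation=90)
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Data Points')
plt.show()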
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_data = combine_df.groupby("Sex")["Mouse ID"].count()
counts = pd.DataFrame({"Sex":gender_data})
colors = ['#FF99CC','blue']
counts.plot.pie(y='Sex',autopct='%1.1f%%',colors=colors,legend=False)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sizes = counts['Sex']
plt.pie(sizes,colors=['#FF99CC','blue'], labels=['Female','Male'], autopct='%1.1f%%' )
plt.ylabel('Sex')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
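# A minimal sketch of the steps listed above (not part of the original notebook),
# assuming clean_df from the cells above; the helper names below are hypothetical.
four_drugs = clean_df[clean_df['Drug Regimen'].isin(['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'])]
final_rows = four_drugs.sort_values('Timepoint').drop_duplicates('Mouse ID', keep='last')
treatment_list = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
tumor_vols = []
for drug in treatment_list:
    vols = final_rows.loc[final_rows['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
    q1 = vols.quantile(0.25)
    q3 = vols.quantile(0.75)
    iqr = q3 - q1
    outliers = vols[(vols < q1 - 1.5 * iqr) | (vols > q3 + 1.5 * iqr)]
    print(f"{drug} potential outliers: {outliers.values}")
    tumor_vols.append(vols)
plt.boxplot(tumor_vols, labels=treatment_list)
plt.ylabel('Final Tumor Volume (mm3)')
plt.show()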
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_df = combine_df.loc[combine_df['Drug Regimen'] == 'Capomulin']
forline_df = capomulin_df.loc[capomulin_df["Mouse ID"] == "l509",:]
forline_df.head()
x_axis_timepoint = forline_df['Timepoint']
tumor_size = forline_df['Tumor Volume (mm3)']
plt.plot(x_axis_timepoint, tumor_size)
plt.xlabel('Timepoint (Days)')
plt.ylabel('Tumor Volume (mm3)')
plt.title('Capomulin Treatment for Mouse id l509')
plt.savefig('Mouse_LineChart_Capomulin_l509')
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_average = capomulin_df.groupby(['Mouse ID']).mean()
plt.scatter(capomulin_average['Weight (g)'], capomulin_average['Tumor Volume (mm3)'])
plt.xlabel('Weight(g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.title('Capomulin Treatment Tumor and Weight Relation')
plt.tight_layout()
plt.savefig('Scatter_Plot_Weight_TumorVolume')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
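# A minimal sketch (not part of the original notebook), assuming capomulin_average
# from the scatter-plot cell above and st = scipy.stats.
weights = capomulin_average['Weight (g)']
volumes = capomulin_average['Tumor Volume (mm3)']
print(f"The correlation coefficient is {st.pearsonr(weights, volumes)[0]:.2f}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(weights, volumes)
plt.scatter(weights, volumes)
plt.plot(weights, slope * weights + intercept, 'r-')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()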
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combinedData = mouse_metadata.merge(study_results,on='Mouse ID')
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
summaryStats = combinedData.groupby(['Drug Regimen']).agg(['mean','median','var','std','sem'])['Tumor Volume (mm3)']
summaryStats
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
barDF = pd.DataFrame({'Drug Regimen': combinedData.groupby('Drug Regimen')['Tumor Volume (mm3)'].count().keys(),
'Count': combinedData.groupby('Drug Regimen')['Tumor Volume (mm3)'].count().values});
barDF.plot.bar(x='Drug Regimen',y='Count',figsize=(8,5),legend=False,alpha=0.5)
plt.xlabel('Drug Regimen')
plt.ylabel('Count')
plt.xticks(rotation=-60,ha='left')
plt.title('Number of Samples per Regimen')
plt.show()
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
plt.figure(figsize=(8,5))
plt.bar(np.arange(len(barDF['Drug Regimen'])),barDF['Count'],alpha=0.5)
plt.xticks(np.arange(len(barDF['Drug Regimen'])),barDF['Drug Regimen'].values,rotation=-60,ha='left')
plt.xlabel('Drug Regimen')
plt.ylabel('Count')
plt.title('Number of Samples per Regimen')
plt.show()
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Create a dataframe holding the count of unique Mouse IDs by Gender
genderCount = pd.DataFrame({'Count':mouse_metadata.groupby('Sex')['Mouse ID'].count().values},
index=mouse_metadata.groupby('Sex')['Mouse ID'].count().keys())
# Use that dataframe to produce the pie plot
genderCount.plot.pie(y='Count',figsize=(8, 5),autopct="%1.1f%%",legend=False,startangle=30,shadow=True)
plt.axis("equal")
plt.title("Distribution of Mice by Sex")
plt.ylabel("")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
labels = mouse_metadata.groupby('Sex')['Mouse ID'].count().keys()
plt.figure(figsize=(8,5))
plt.pie(genderCount['Count'],autopct="%1.1f%%",startangle=30,shadow=True,labels=labels)
plt.axis("equal")
plt.title("Distribution of Mice by Sex")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots Based on our summary statistics, the means of two drugs (Capomulin and Ramicane) appear significantly lower than those of the other drugs. So we will include these two, at least. To decide which other two drugs to include, I made a plot of the means and error bars (using the standard error of the mean). I also limited the y-axis to just focus on these drugs and make the error bars a little clearer.
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
x_axis = np.arange(0,len(summaryStats.index),1) + 1
means = summaryStats['mean'].values
se = summaryStats['sem'].values
fig,ax = plt.subplots()
ax.errorbar(x_axis, means, se, fmt="o")
ax.set_ylim(51,56)
plt.xticks(np.arange(len(barDF['Drug Regimen'])) + 1,barDF['Drug Regimen'].values,rotation=-60,ha='left')
plt.title('Means and SEMs for Each Drug Regimen')
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Drug Regimen')
plt.show()
###Output
_____no_output_____
###Markdown
Based on the above plot, I would say the next two most promising drugs are Propriva and Ceftamin, although arguments could also be made for Infubinol and even Zoniferol. With that said, the readme.md file states that we should use Capomulin, Ramicane, Infubinol, and Ceftamin. Editorial: Based on the samples provided, it looks slightly more likely that Propriva regimens result in a smaller tumor volume, on average, than Ceftamin and Infubinol do. *shrug emoji*Four drugs for further investigation:- Capomulin- Ramicane- Infubinol- Ceftamin
###Code
# Grab only our Top 4 candidates at the last timestep (45)
mostPromising = combinedData[(combinedData['Drug Regimen'] == 'Capomulin') |
(combinedData['Drug Regimen'] == 'Ceftamin') |
(combinedData['Drug Regimen'] == 'Infubinol') |
(combinedData['Drug Regimen'] == 'Ramicane')].groupby('Mouse ID').tail(1)
regimen_dict = {}
# Loop through each of the regimens and print our quantitative statistics
for regimen in ['Capomulin','Ceftamin','Infubinol','Ramicane']:
df = mostPromising[mostPromising['Drug Regimen'] == regimen]
regimen_dict[regimen] = df
quartiles = df['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"For the drug regimen {regimen}...")
print(f"The lower quartile of final tumor volume is: {lowerq}")
print(f"The upper quartile of final tumor volume is: {upperq}")
print(f"The interquartile range of final tumor volume is: {iqr}")
print(f"The the median of final tumor volume is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print('')
print('****** Outlier Analysis ******')
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
outlier_volume = df.loc[(df['Tumor Volume (mm3)'] < lower_bound) | (df['Tumor Volume (mm3)'] > upper_bound)]
if outlier_volume.index.empty:
print("There are no likely outliers.")
else:
print("Potential outliers detected: ")
print(outlier_volume)
print('')
print('-------------------------------')
print('')
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
data = [regimen_dict['Capomulin']['Tumor Volume (mm3)'],regimen_dict['Infubinol']['Tumor Volume (mm3)'],
regimen_dict['Ceftamin']['Tumor Volume (mm3)'],regimen_dict['Ramicane']['Tumor Volume (mm3)']]
green_diamond = dict(markerfacecolor='g', marker='D')
fig, ax = plt.subplots(figsize=(10,6))
ax.set_title('Final Tumor Volume Distribution')
ax.boxplot(data,flierprops=green_diamond)
ax.set_xticks(np.arange(len(data)) + 1)
ax.set_xticklabels(['Capomulin','Infubinol','Ceftamin','Ramicane'])
ax.set_xlabel('Drug Regimen')
ax.set_ylabel('Tumor Volume (mm3)')
plt.show()
combinedData[combinedData['Drug Regimen'] == 'Capomulin'].sample(1).iloc[0,0]
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Get a random mouse
mouseID = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].sample(1).iloc[0,0]
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
x = combinedData[combinedData['Mouse ID'] == mouseID]['Timepoint']
y = combinedData[combinedData['Mouse ID'] == mouseID]['Tumor Volume (mm3)']
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(x,y)
ax.set_title(f'Tumor Volume (mm3) Trend for Mouse {mouseID}')
ax.set_xlabel('Timepoint')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
x = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].groupby('Mouse ID')['Weight (g)'].mean()
y = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].groupby('Mouse ID')['Tumor Volume (mm3)'].mean()
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x,y)
ax.set_title(f'Mouse Weight vs Mean Tumor Volume (mm3) for Capomulin Regimen')
ax.set_xlabel('Weight (g)')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
print(f'The calculated correlation coefficient is {st.pearsonr(x,y)[0]}')
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x, y)
regress_values = x * slope + intercept
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x,y)
ax.set_title(f'Mouse Weight vs Mean Tumor Volume (mm3) for Capomulin Regimen')
ax.set_xlabel('Weight (g)')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)
ax.plot(x,regress_values,"k-")
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq,(20,36),fontsize=15,color="black")
plt.show()
###Output
The calculated correlation coefficient is 0.8419363424694719
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
Mouse_data = pd.merge(mouse_metadata, study_results, on = "Mouse ID", how = "inner")
# Display the data table for preview
Mouse_data.head()
# Checking the number of mice.
Mouse_num = Mouse_data.groupby("Mouse ID")
Mice_num =len(Mouse_num.count())
Mice_num
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
Mouse_time = Mouse_data.groupby(["Mouse ID","Timepoint"])
Mice = Mouse_time.count()
Mice = Mice.sort_values("Drug Regimen")
Mice.tail()
# Optional: Get all the data for the duplicate mouse ID.
Mouse_data.loc[Mouse_data["Mouse ID"]== "g989"]
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
Data_mouse = Mouse_data
rmv = Data_mouse[Data_mouse["Mouse ID"]== "g989"].index.values
Data_mouse = Data_mouse.drop(rmv)
# Checking the number of mice in the clean DataFrame.
Mouse_num_new = Data_mouse.groupby("Mouse ID")
Mice_num_new =len(Mouse_num_new.count())
Mice_num_new
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
ss = Data_mouse.groupby("Drug Regimen")
mean_drug = pd.Series(ss["Tumor Volume (mm3)"].mean(), name = "Mean")
median_drug = pd.Series(ss["Tumor Volume (mm3)"].median(), name = "Median")
var_drug = pd.Series(ss["Tumor Volume (mm3)"].var(), name = "Variance")
std_drug = pd.Series(ss["Tumor Volume (mm3)"].std(), name = "STD Dev")
sem_drug = pd.Series(ss["Tumor Volume (mm3)"].sem(), name = "SEM")
sum_stat = pd.concat([mean_drug, median_drug, var_drug, std_drug, sem_drug], axis = 1)
sum_stat
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
ss2 = Data_mouse.groupby("Drug Regimen").agg(["mean","median","var","std", "sem"])
ss2["Tumor Volume (mm3)"]
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
drug_count = Data_mouse.groupby("Drug Regimen")
mcount = drug_count["Mouse ID"].nunique()
mcount.plot.bar(rot = 90, title = "Mice per Drug Regimen")
plt.ylabel("Mouse Count")
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
ax = mcount
ax = ax.reset_index()
xax = ax["Drug Regimen"]
yax = ax["Mouse ID"]
plt.bar(xax, yax, width = .5)
plt.xticks(rotation = 90)
plt.title("Rats per Drug")
plt.xlabel("Drug Regime")
plt.ylabel("Rat Count")
# Generate a pie plot showing the distribution of female versus male mice using pandas
sex_count = Data_mouse.groupby("Sex")
scount = sex_count["Mouse ID"].nunique()
scount
scount.plot.pie( title = "Sex Distribution")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
pc = scount
pc = pc.reset_index()
pxax = pc["Mouse ID"]
plt.pie(pxax, labels = ["Female","Male"])
plt.title("Sex Distribution")
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
Haf_dat = Data_mouse[Data_mouse["Drug Regimen"] == "Capomulin"]
Haf_dat = Haf_dat.append(Data_mouse[Data_mouse["Drug Regimen"] == "Ramicane"])
Haf_dat = Haf_dat.append(Data_mouse[Data_mouse["Drug Regimen"] == "Infubinol"])
Haf_dat = Haf_dat.append(Data_mouse[Data_mouse["Drug Regimen"] == "Ceftamin"])
Haf_dat
vol = Haf_dat.groupby(["Mouse ID","Timepoint"])
vol2 = vol["Tumor Volume (mm3)"].max()
vol3 = vol2.reset_index()
vol4 = vol3[vol3["Timepoint"] == 45]
vol4
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
Data_test = Data_mouse
Test_mouse = pd.merge(Data_test, vol4, on = ("Mouse ID","Timepoint"), how = "left", suffixes = ("","(end)"))
Test_mouse
# Put treatments into a list for for loop (and later for plot labels)
D_rug = ax["Drug Regimen"].values.tolist()
D_rug
# Create empty list to fill with tumor vol data (for plotting)
Tum_data = Test_mouse["Tumor Volume (mm3)"]
Tum_data
# Calculate the IQR and quantitatively determine if there are any potential outliers.
Quart = Tum_data.quantile([.25,.5,.75])
Quart
Top_bar = Quart[.75]
Low_bar = Quart[.25]
IQR = Top_bar - Low_bar
Top_out = Top_bar + 1.5*IQR
Low_out = Low_bar - 1.5*IQR
print(f"{Top_out} - {Low_out}")
Tum_data.sort_values()
#Many outliers
# Locate the rows which contain mice on each drug and get the tumor volumes
Box_cap = Test_mouse[Test_mouse["Drug Regimen"] == "Capomulin"]
Box_cap = Box_cap[["Drug Regimen","Tumor Volume (mm3)(end)"]]
Box_cap = Box_cap.dropna()
Box_cap
Box_ram = Test_mouse[Test_mouse["Drug Regimen"] == "Ramicane"]
Box_ram = Box_ram[["Drug Regimen","Tumor Volume (mm3)(end)"]]
Box_ram = Box_ram.dropna()
Box_ram
Box_inf = Test_mouse[Test_mouse["Drug Regimen"] == "Infubinol"]
Box_inf = Box_inf[["Drug Regimen","Tumor Volume (mm3)(end)"]]
Box_inf = Box_inf.dropna()
Box_inf
Box_cef = Test_mouse[Test_mouse["Drug Regimen"] == "Ceftamin"]
Box_cef = Box_cef[["Drug Regimen","Tumor Volume (mm3)(end)"]]
Box_cef = Box_cef.dropna()
Box_cef
# add subset
Box_dat = pd.merge(Box_cap, Box_ram, on = ("Drug Regimen", "Tumor Volume (mm3)(end)"), how = "outer")
Box_dat = pd.merge(Box_dat, Box_inf, on = ("Drug Regimen", "Tumor Volume (mm3)(end)"), how = "outer")
Box_dat = pd.merge(Box_dat, Box_cef, on = ("Drug Regimen", "Tumor Volume (mm3)(end)"), how = "outer")
Box_dat
# Determine outliers using upper and lower bounds
Qt = Box_dat["Tumor Volume (mm3)(end)"].quantile([.25,.5,.75])
q3 = Qt[.75]
q1 = Qt[.25]
up_b = q3 + 1.5*(q3-q1)
down_b = q1 - 1.5*(q3-q1)
print(f"{up_b} - {down_b}")
Box_dat.sort_values(by = "Tumor Volume (mm3)(end)")
#No Outliers
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, g1 = plt.subplots()
g1.set_ylabel("Tumor Volume (mm^3)(end)")
g1.boxplot([Box_cap["Tumor Volume (mm3)(end)"], Box_ram["Tumor Volume (mm3)(end)"], Box_inf["Tumor Volume (mm3)(end)"], Box_cef["Tumor Volume (mm3)(end)"]])
g1.set_xticklabels(["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
#s185
x_rat = Data_mouse[Data_mouse["Mouse ID"] == "s185"]
x_rat = x_rat[["Timepoint", "Tumor Volume (mm3)"]]
x_rat
plt.plot(x_rat["Timepoint"], x_rat["Tumor Volume (mm3)"])
plt.ylabel("Tumor size (mm^3)")
plt.xlabel("Timepoint")
plt.title("Capomulin effect on s185")
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capo = Data_mouse[Data_mouse["Drug Regimen"] == "Capomulin"]
capo1 = capo.groupby("Mouse ID")
capo2 = capo1.mean()
capo3 = capo2[["Weight (g)", "Tumor Volume (mm3)"]]
capo3
plt.scatter(capo3["Weight (g)"], capo3["Tumor Volume (mm3)"])
plt.title("Weight vs. Average Tumor Volume for Capomulin")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm^3)")
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
capo_corr = st.pearsonr(capo3["Weight (g)"], capo3["Tumor Volume (mm3)"])[0]
capo_corr
plt.scatter(capo3["Weight (g)"], capo3["Tumor Volume (mm3)"])
m, b = np.polyfit(capo3["Weight (g)"], capo3["Tumor Volume (mm3)"],1)
plt.plot(capo3["Weight (g)"], m*capo3["Weight (g)"] + b)
#Observations
#1. While drugs such as Capomulin and Ramicane seemed to have moderately positive effects on tumor reduction,
#   drugs such as Infubinol and Ceftamin were not only of little help, but showed a positive correlation with tumor size,
#   making them worse than leaving the tumor alone (assuming it wasn't growing like that in a control group).
#2. Weight and average tumor size had a clear positive correlation, even though one wouldn't expect the two to be that
#   closely related. The tumor does not add its weight to the total, since each mouse's recorded weight
#   was constant across timepoints while the tumor grew or shrank.
#3. The excluded mouse, g989, had two entries for many of the early timepoints, but with different tumor sizes for those
#   entries. The differences were too large to be measurement error. It is possible that the duplicate entries actually
#   belonged to two different mice, one mis-entered as the other. This would also explain why, while most drug regimens
#   had 25 mice, Propriva and Stelasyn had only 24 (after the copies of g989 were removed).
###Output
_____no_output_____
###Markdown
Observations and Insights Pymaceuticals Inc.--- AnalysisAfter a rigorous analysis of the data provided by the pharmaceutical company, the following conclusions were reached:* In general, a mouse's weight is positively correlated with its tumor volume: the larger the tumor (mm3), the heavier the mouse (g).* Looking at the drug regimen Capomulin, we can see its effect on tumor volume reduction. As shown in the analysis, tumor volume decreases over the measured timepoints in mice treated with Capomulin.* The data collected has a fairly even split of male and female mice.* The only drug regimen with a potential outlier is Infubinol.* Based on the analysis, it is fair to say that both the Capomulin and Ramicane treatments are effective at reducing tumor volume by the end of the study.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
%matplotlib inline
#The %matplotlib inline magic on the previous line lets plots render directly in Jupyter Lab. I got this from the following video: https://www.linkedin.com/learning/pandas-essential-training/basic-plotting?u=38416468
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
#Check that the df are read and displayed with a print statment
#mouse_metadata.head()
#study_results.head()
# Combine the data into a single dataset
merge_df = pd.merge(mouse_metadata, study_results, on ="Mouse ID")
# Display the data table for preview
merge_df.head()
# Checking the number of mice.
# Each mouse has exactly one row at Timepoint 0, so the count of zeros equals the number of mice
mice_count = merge_df["Timepoint"].value_counts()[0]
print(f'There are a total of {mice_count} mice in this data set')
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicated_values = merge_df[merge_df.duplicated()]
#I understood the .duplicated() function thanks to this website: https://thispointer.com/pandas-find-duplicate-rows-in-a-dataframe-based-on-all-or-selected-columns-using-dataframe-duplicated-in-python/#:~:text=To%20find%20%26%20select%20the%20duplicate,argument%20is%20'first').
duplicated_values
# Optional: Get all the data for the duplicate mouse ID.
Duplicated_Mouse = merge_df.loc[merge_df['Mouse ID'] == 'g989']
Duplicated_Mouse
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
index_name= merge_df[merge_df['Mouse ID']=='g989'].index
#index_names.head()
df = merge_df.drop(index_name)
df.head()
# Checking the number of mice in the clean DataFrame.
clean_df_mice_count = df["Timepoint"].value_counts()[0]
print(f'There are a total of {clean_df_mice_count} mice in the clean dataframe, no duplicated values are in here!')
df.head()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
mean = df.groupby('Drug Regimen').mean()["Tumor Volume (mm3)"]
median = df.groupby('Drug Regimen').median() ["Tumor Volume (mm3)"]
variance = df.groupby('Drug Regimen').var() ["Tumor Volume (mm3)"]
standar_deviation = df.groupby('Drug Regimen').std() ["Tumor Volume (mm3)"]
sem = df.groupby('Drug Regimen').sem() ["Tumor Volume (mm3)"]
# Assemble the resulting series into a single summary dataframe.
summary_statistics_table = pd.DataFrame({"Mean Tumor Volume": mean,
"Median Tumor Volume": median,
"Tumor VolumeVariance": variance,
"Tumor Volume Standar Deviation": standar_deviation,
"Tumor Volume Std. Err.": sem})
summary_statistics_table
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summary_df = df.groupby('Drug Regimen').agg({'Tumor Volume (mm3)':['mean', 'median','var', 'std', 'sem' ]})
summary_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
Drug_Regimen = df.groupby('Drug Regimen')
count_mice = Drug_Regimen['Mouse ID'].count()
#count_mice
graph_df = pd.DataFrame({'Mice Count': count_mice})
#graph_df
sorted_graph_df = graph_df.sort_values(by=["Mice Count"], ascending = False)
#sorted_graph_df
list_sorted_graph = sorted_graph_df.index.tolist()
x_axis = np.arange(len(df['Drug Regimen'].unique()))
tick_locations = [value for value in x_axis]
plt.figure(figsize=(20,3))
plt.title("Number of mice per Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of mice")
plt.bar(x_axis, graph_df["Mice Count"].sort_values(ascending = False), color='b', alpha=0.5, align='center' )
plt.xticks(tick_locations, list_sorted_graph, rotation='vertical')
plt.tight_layout()
plt.show()
mice_group=df.groupby('Drug Regimen')
count_mice = mice_group['Mouse ID'].count()
count_mice
count_chart= sorted_graph_df.plot(kind='bar', title="Number mice per Drug Regimen")
count_chart.set_xlabel('Drug Regimen')
count_chart.set_ylabel('Number of mice');
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender= df['Sex'].value_counts()
gender
sizes = [958,922]
colors = ['Cyan','pink']
gender.plot(kind='pie',colors=["lightskyblue","lightcoral"], figsize=(5,5), autopct="%1.1f%%", title="Female VS Male");
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sex=["Female",'Male']
plt.pie(gender, colors=["lightskyblue","lightcoral"], labels=sex, autopct="%1.1f%%", startangle=0)
plt.ylabel ('sex')
plt.title("Proportion of Males to Females")
#plt.legend()
plt.axis("equal")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots (Tutoring Sesion)
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
top4 = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
top4_df=df['Drug Regimen'].isin(top4) # In tutoring with Trent Littel we went over this method, and I was able to understand it and move forward to get the last timepoint in the next few lines of code
filtered_df = df[top4_df]
timepoint_df = filtered_df.drop_duplicates(subset = 'Mouse ID', keep='last')
timepoint_df
#timepoint_df.drop(columns='Tumor volume at the last timepoint') #I had created this column at one point in the data frame and I had to drop it as it was of no help or need.
#df.drop(columns='Tumor volume at the last timepoint', inplace=True)
#had to use inplace = True to delete the column on the original df so it did not affect all of my other calculations
timepoint_df.tail()
# Put treatments into a list for for loop (and later for plot labels)
treatment_list = timepoint_df['Drug Regimen'].unique().tolist()
#treatment_list
# Create empty list to fill with tumor vol data (for plotting)
tumor_data = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
#Sarah in my study group helped me understand the use of a for loop in this section and what the loop was supposed to do in order to obtain potential outliers.
for i in treatment_list:
quartiles = timepoint_df.loc[(timepoint_df['Drug Regimen']==i)]['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_data= round(timepoint_df.loc[(timepoint_df['Drug Regimen']==i)]["Tumor Volume (mm3)"],6)
outliers = []
for v in tumor_data:
if (v < lower_bound) | (v> upper_bound):
outliers.append(v)
print(f"{i}'s potential outliers:{outliers}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
red_dot=dict(markerfacecolor='r', marker='o')
box_plot =timepoint_df.boxplot(by="Drug Regimen",column =["Tumor Volume (mm3)"], flierprops=red_dot, figsize=(10,7))
box_plot.set_ylabel("Tumor Volume (mm3)");
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capo_reg_df = df.loc[df['Drug Regimen'] =='Capomulin',:]
#capo_reg_df.head()
mouse_l509 = capo_reg_df.loc[capo_reg_df['Mouse ID'] =='l509',:]
x_axis= mouse_l509['Timepoint']
y_axis = mouse_l509['Tumor Volume (mm3)']
plt.title('Capomulin treatment of mouse l509')
plt.xlabel('Timepoint (days)')
plt.ylabel('Tumor Volume (mm3)')
plt.plot(x_axis, y_axis, linewidth = 2, markersize = 12);
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
average_cap_tumor_volume = capo_reg_df.groupby("Mouse ID").mean()
x_axis = average_cap_tumor_volume['Weight (g)']
y_axis = average_cap_tumor_volume['Tumor Volume (mm3)']
plt.xlabel ('Weight (g)')
plt.ylabel ('Average Tumor Volume (mm3)')
plt.title("Mouse weight versus average tumor volume for the Capomulin treatment regimen")
plt.scatter(x_axis,y_axis);
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_axis, y_axis)
regress_values = x_axis * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
#Set plot points and plot Scatter
plt.scatter(x_axis,y_axis, color='b')
plt.plot(x_axis,regress_values, color='r')
plt.annotate(line_eq,(20,10),fontsize=15)
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.scatter(x_axis,y_axis)
###Output
_____no_output_____
###Markdown
Observations and Insights * Capomulin was effective in reducing the volume of the SCC tumor over a 45 day period when plotting data for one mouse [LineChart](Figures/LineChart.png) * Based on the average final SCC tumor volume for the four drug regimens of interest (i.e. Capomulin, Ramicane, Infubinol, and Ceftamin), Capomulin and Ramicane appear to be the most effective. [Box and Whisker](Figures/BoxWhiskerPlot.png) * Mouse weight strongly correlates (correlation coefficient of 0.84) with the average SCC tumor volume for mice on the Capomulin treatment. Therefore, mouse weight needs to be another controlled variable to prevent skewing the results when comparing treatments. [Regression](Figures/ScatterWeightTumorVolRegression.png)
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_data = pd.merge(mouse_metadata,study_results,how='outer',on='Mouse ID')
# Display the data table for preview
combined_data
# Checking the number of mice.
total_mice = mouse_metadata['Mouse ID'].count()
total_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dup_data = combined_data.loc[combined_data.duplicated(subset=["Mouse ID","Timepoint"])]
# Optional: Get all the data for the duplicate mouse ID.
dup_mouse = combined_data.loc[combined_data["Mouse ID"]=="g989"]
dup_mouse
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_data = combined_data.drop(index=dup_mouse.index)
# Checking the number of mice in the clean DataFrame.
total_mice = len(clean_data['Mouse ID'].unique())
total_mice
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Initialize lists
regimens = clean_data["Drug Regimen"].unique()
mean_vol = []
median_vol = []
var_vol = []
std_vol = []
sems = []
# Iterate through the different treatments to calculate the summary statistics
for regimen in regimens:
df = clean_data[clean_data["Drug Regimen"]==regimen]
mean_vol.append(df["Tumor Volume (mm3)"].mean())
median_vol.append(df["Tumor Volume (mm3)"].median())
var_vol.append(np.var(df["Tumor Volume (mm3)"],ddof=0))
std_vol.append(np.std(df["Tumor Volume (mm3)"],ddof=0))
sems.append(st.sem(df["Tumor Volume (mm3)"]))
# Create a dataframe for the summary statistics
summ_stats_df = pd.DataFrame({
"Drug Regimen":regimens,
"Mean Volume": mean_vol,
"Median Volume": median_vol,
"Variance": var_vol,
"Standard Deviation": std_vol,
"Standard Error": sems
})
summ_stats_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
regimen_df = clean_data.groupby(['Drug Regimen'])
# Use the grouped object to calculate the summary statistics
mean_vol = regimen_df["Tumor Volume (mm3)"].mean()
median_vol = regimen_df["Tumor Volume (mm3)"].median()
var_vol = regimen_df["Tumor Volume (mm3)"].var(ddof=0)
std_vol = regimen_df["Tumor Volume (mm3)"].std(ddof=0)
sems = regimen_df["Tumor Volume (mm3)"].sem()
# Create a dataframe for the summary statistics
summ_stats_df = pd.DataFrame({
"Mean Volume": mean_vol,
"Median Volume": median_vol,
"Variance": var_vol,
"Standard Deviation": std_vol,
"Standard Error": sems
})
summ_stats_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
# Create a dataframe indexed by drug treatment to use pandas bar plotting
regimen_count = regimen_df['Mouse ID'].count()
bar_data_df = pd.DataFrame({
"Count":regimen_count
})
bar_data_df.plot(kind='bar',figsize=(10,3),rot=45,width=0.5)
plt.title("Total number of mice for each treatment")
# Save the figure
plt.savefig("Figures/PandaBarChart.png")
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
plt.figure(figsize=(10,3))
plt.bar(regimens,regimen_count,align='center',width=0.5)
plt.xticks(rotation=45)
plt.xlim(-0.75,len(regimens)-0.25)
plt.ylim(0, max(regimen_count)*1.05)
# Set title and labels
plt.title("Total number of mice for each treatment")
plt.xlabel("Drug Regimen")
#Save figure
plt.savefig("Figures/PyplotBarChart.png")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Get count of female and male mice
gender_data = clean_data.groupby('Sex')
gender_count = gender_data['Sex'].count()
# Create dataframe to count the distribution of female and male mice
gender_count_df = pd.DataFrame(gender_count)
gender_count_df.plot(kind='pie', y="Sex", title= "Male versus Female Mice Distribution", autopct='%1.1f%%', startangle= 110,shadow=True, fontsize=14,legend =False)
plt.axis("equal")
# Save figure
plt.savefig("Figures/PandaPieChart.png")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Get the count for male and female mice into an array
genders = gender_count_df.index.values
plt.pie(gender_count,labels=genders, autopct="%1.1f%%", shadow=True, startangle=110)
plt.rcParams['font.size'] = 14
plt.title("Male versus Female Mice Distribution")
plt.ylabel("Sex")
plt.axis("equal")
# Save figure
plt.savefig("figures/PyplotPieChart.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
mouseid_df = clean_data.groupby(["Mouse ID"])
tumor_data = mouseid_df.last()
tumor_data
# Put treatments into a list for for loop (and later for plot labels)
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
max_tumor_vol = []
for treatment in treatments:
temp_df = tumor_data[tumor_data["Drug Regimen"]==treatment]["Tumor Volume (mm3)"]
# Locate the rows which contain mice on each drug and get the tumor volumes
max_tumor_vol.append(list(temp_df))
# Calculate the quartiles
quartiles = temp_df.quantile([0.25,0.5,0.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
# Calculate the IQR and quantitatively determine if there are any potential outliers.
iqr = upperq - lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
# Determine outliers using upper and lower bounds
print(f"{treatment} potential outliers: {temp_df[(temp_df < lower_bound) | (temp_df > upper_bound)]}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume by Drug Regimen')
ax1.set_ylabel('Tumor Volume (mm3)')
ax1.set_xlabel('Drug Regimen')
ax1.boxplot(max_tumor_vol)
plt.xticks([1,2,3,4],treatments)
# Save the figure
plt.savefig("Figures/BoxWhiskerPlot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Filter the original dataframe to only include Capomulin data
capomulin_data = clean_data[clean_data['Drug Regimen'] == 'Capomulin']
# Create a Capomulin dataframe that only has data for Mouse ID s185
capomulin_data_1 = capomulin_data[capomulin_data['Mouse ID']=="s185"]
# List comprehension to define the x and y axes
timepoints = [value for value in capomulin_data_1['Timepoint']]
tumor_vol = [value for value in capomulin_data_1['Tumor Volume (mm3)']]
plt.plot(timepoints,tumor_vol,marker='o',label='s185')
plt.legend(loc="best")
plt.title("Timepoints vs Tumor Volume")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.grid(True)
plt.savefig("Figures/LineChart.png")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Group by Mouse ID to collect weight and average tumor volume for each mouse on Capomulin
capomulin_mouse_id = capomulin_data.groupby(['Mouse ID'])
mouse_weight = capomulin_mouse_id["Weight (g)"].mean()
tumor_vol = round(capomulin_mouse_id["Tumor Volume (mm3)"].mean(),3)
plt.scatter(mouse_weight,
tumor_vol,
marker='o',
facecolors='r',
edgecolors='black')
plt.title("Mouse Weight vs. Avg. Tumor Volume")
plt.xlabel("Mouse weight (g)")
plt.ylabel("Tumor Volume (mm3)")
# Save the figure
plt.savefig("Figures/ScatterWeightTumorVol.png")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(mouse_weight, tumor_vol)
print(f"The correlation between both factors is {round(correlation[0],2)}")
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(mouse_weight,tumor_vol)
regress_values = mouse_weight * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(mouse_weight, tumor_vol)
plt.plot(mouse_weight,regress_values,"r-")
plt.annotate(line_eq,(17,37),fontsize=15,color="black")
plt.title("Mouse Weight vs Avg. Tumor Volume")
plt.xlabel("Mouse Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.grid(True)
# Save the figure
plt.savefig("Figures/ScatterWeightTumorVolRegression.png")
plt.show()
###Output
The correlation between both factors is 0.84
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
mouse_data = pd.merge(mouse_metadata, study_results, how = "left", on = "Mouse ID")
mouse_data.sort_values("Mouse ID")
# Checking the number of mice in the DataFrame.
mice_amount = len(mouse_data["Mouse ID"].unique())
mice_amount
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. (what does this mean?)
duplicate = mouse_data.duplicated(subset = ["Mouse ID", "Timepoint"])
duplicate_id = mouse_data.loc[duplicate, "Mouse ID"].unique()
dup_id = duplicate_id[0]
dup_id
# Optional: Get all the data for the duplicate mouse ID.
duplicated_info = mouse_data.loc[mouse_data["Mouse ID"]==dup_id,:]
duplicated_info
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_mouse = mouse_data.loc[mouse_data["Mouse ID"]!= dup_id,:]
clean_mouse
# Checking the number of mice in the clean DataFrame. (only unique())
clean_mouse_amount = len(clean_mouse["Mouse ID"].unique())
clean_mouse_amount
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
tumor_mean = clean_mouse.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean()
tumor_median = clean_mouse.groupby("Drug Regimen")["Tumor Volume (mm3)"].median()
tumor_variance = clean_mouse.groupby("Drug Regimen")["Tumor Volume (mm3)"].var()
tumor_sd = clean_mouse.groupby("Drug Regimen")["Tumor Volume (mm3)"].std()
tumor_sem = clean_mouse.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem()
drug_regimen_straight = pd.DataFrame({"Mean": tumor_mean, "Median": tumor_median, "Variance": tumor_variance, "STD": tumor_sd, "SEM": tumor_sem})
drug_regimen_straight
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function.
drug_regimen_group = clean_mouse.groupby(["Drug Regimen"])["Tumor Volume (mm3)"].agg(["mean", "median", "var", "std", "sem"])
drug_regimen_group
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas. (which dataset should I be using? clean_mouse or mouse_data?)
mice_pt_bar = clean_mouse.groupby(["Drug Regimen", "Timepoint"])["Mouse ID"].count()
mice_bar_df = pd.DataFrame({"Mouse per Regimen": mice_pt_bar})
mice_bar_df = mice_bar_df.unstack(level = 0)
mouse_bar = mice_bar_df.plot(
kind = "bar", title = "Amount of Mouse Tested vs. Timepoint Per Drug Regimen", figsize = (20,10)
)
mouse_bar.set_ylabel("Number of Mouse")
plt.show()
# Shows the ylim is accurate; the graph seems like the the top has been cut for a couple of drug regimens at timepoint 0.
mice_bar_df[('Mouse per Regimen', 'Capomulin')].max()
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
column_names = mice_bar_df.columns
n=0
for columns in column_names:
mice_bar_plt = plt.bar(
x = mice_bar_df[(columns)].index.values-1+n, height = mice_bar_df[columns], width = 0.25,
align = "center"
)
n+=0.25
plt.legend(title = "Drug Regimen", labels = column_names)
plt.gcf().set_size_inches(20,10)
plt.title("Amount of Mouse Tested vs. Timepoint Per Drug Regimen")
plt.xlabel("Timepoint")
plt.ylabel("Number of Mouse")
plt.show()
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_pd = clean_mouse.drop_duplicates(subset="Mouse ID", keep='first')["Sex"].value_counts()
labels = gender_pd.index.values
gender_pie_pd = gender_pd.plot(kind = "pie", labels = labels, explode = [0.2,0], autopct = "%1.1f%%", shadow = True, startangle = 45)
gender_pie_pd.set_title("Sex Distribution of Mice")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_pie_plt = plt.pie(gender_pd, labels = labels, explode = [0.2,0], autopct = "%1.1f%%", shadow = True, startangle = 45)
plt.title("Sex Distribution for Mouse")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
final_tumor_df_1 = clean_mouse.sort_values(by=["Timepoint"], ascending = False)
# merge_final_tumor = final_tumor_df[["Mouse ID","Timepoint"]].merge(clean_mouse, on=["Mouse ID","Timepoint"], how='left')
final_tumor_df_2 = final_tumor_df_1.drop_duplicates(subset="Mouse ID", keep='first')
final_tumor_df_3 = final_tumor_df_2.sort_values(by=["Tumor Volume (mm3)"])
final_tumor_df_4 = final_tumor_df_3.drop_duplicates(subset="Drug Regimen", keep='first')
drug_index = final_tumor_df_4["Drug Regimen"].tolist()
drug_index
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Create a function and a for loop to calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
promising_df = final_tumor_df_2.loc[
(final_tumor_df_2["Drug Regimen"]== "Ramicane") | (final_tumor_df_2["Drug Regimen"]== "Capomulin") |
(final_tumor_df_2["Drug Regimen"]== "Infubinol") | (final_tumor_df_2["Drug Regimen"]== "Ceftamin"),:
]
for n in range(0,4):
df_n = promising_df.loc[(promising_df["Drug Regimen"]== drug_index[n]),:].rename(columns={"Tumor Volume (mm3)": drug_index[n]})
tum_n = df_n[drug_index[n]]
quartile_n = tum_n.quantile(q=[0.25, 0.5, 0.75])
lowerq_n = quartile_n[0.25]
upperq_n = quartile_n[0.75]
iqr_n = upperq_n - lowerq_n
lower_bound_n = lowerq_n - 1.5 * iqr_n
upper_bound_n = upperq_n + 1.5 * iqr_n
outliers_n = df_n.loc[(df_n[drug_index[n]]<lower_bound_n)|(df_n[drug_index[n]]>upper_bound_n),:]
tum_n_df = pd.DataFrame(tum_n)
plt.boxplot(tum_n, positions = [n], labels = [drug_index[n]])
plt.gcf().set_size_inches(10,5)
plt.ylabel("Tumor Volume (mm3)")
plt.title("Final Tumor Volumes for Four of the Most Promising Treatment Regimens")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
mouse_u364_df = clean_mouse.loc[clean_mouse["Mouse ID"]=="u364", :]
plt.plot(mouse_u364_df["Timepoint"], mouse_u364_df["Tumor Volume (mm3)"])
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Timepoint vs Tumor Volume for u364")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_scatter_df = clean_mouse.loc[clean_mouse["Drug Regimen"]=="Capomulin",:]
avg_tumor = capomulin_scatter_df.groupby(["Mouse ID"]).mean()
plt.scatter(avg_tumor["Weight (g)"], avg_tumor["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Weight vs. Tumor Volume for Capomulin Regimen")
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(avg_tumor["Weight (g)"], avg_tumor["Tumor Volume (mm3)"])
line_eq = f"y = {round(slope,2)} * x + {round(intercept,2)}"
y = slope * avg_tumor["Weight (g)"] + intercept
plt.scatter(avg_tumor["Weight (g)"], avg_tumor["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Weight vs. Tumor Volume for Capomulin Regimen")
plt.plot(avg_tumor["Weight (g)"], y, color = "red")
plt.annotate(line_eq, (20,36), color = "red", fontsize = 16)
plt.show()
avg_tumor_correlation = st.pearsonr(avg_tumor["Weight (g)"], avg_tumor["Tumor Volume (mm3)"])
print(avg_tumor_correlation[0])
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
mouse_data_combined_df = pd.merge(mouse_metadata, study_results, how="left", on=["Mouse ID"])
# Display the data table for preview
mouse_data_combined_df.head(10)
# Checking the number of mice.
mouse_count = len(mouse_data_combined_df["Mouse ID"].unique())
print(mouse_count)
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# boolean = mouse_data_combined_df.duplicated(subset=["Mouse ID"]).any()
# print(boolean, end='\n\n')
# mouse_data_combined_df.drop_duplicates(subset=["Mouse ID"], inplace=True)
# print(mouse_data_combined_df)
duplicate_bool = mouse_data_combined_df.duplicated(subset=["Mouse ID", "Timepoint"], keep=False)
duplicate = mouse_data_combined_df.loc[duplicate_bool == True]
print(duplicate)
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
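# A minimal sketch (not part of the original cell): one way to finish the three steps above.
# It assumes the rows flagged in `duplicate` identify the duplicated mouse; the names
# `duplicate_ids`, `duplicate_data`, and `clean_df` are hypothetical additions.
duplicate_ids = duplicate["Mouse ID"].unique()
# All the data for the duplicate mouse ID
duplicate_data = mouse_data_combined_df[mouse_data_combined_df["Mouse ID"].isin(duplicate_ids)]
# Clean DataFrame with the duplicate mouse dropped
clean_df = mouse_data_combined_df[~mouse_data_combined_df["Mouse ID"].isin(duplicate_ids)]
# Number of mice in the clean DataFrame
len(clean_df["Mouse ID"].unique())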
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
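# A minimal sketch (not part of the original cell), assuming the cleaned DataFrame from the
# earlier sketch is named `clean_df` (hypothetical name).
grouped_vol = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"]
summary_df = pd.DataFrame({
    "Mean": grouped_vol.mean(),
    "Median": grouped_vol.median(),
    "Variance": grouped_vol.var(),
    "Std Dev": grouped_vol.std(),
    "SEM": grouped_vol.sem(),
})
# Same table produced by a single aggregation call
summary_agg = grouped_vol.agg(["mean", "median", "var", "std", "sem"])
summary_agg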
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
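# A minimal sketch (not part of the original cell), reusing the hypothetical `clean_df`.
# Bar plot of measurements per regimen, first with pandas and then with pyplot
regimen_counts = clean_df["Drug Regimen"].value_counts()
regimen_counts.plot(kind="bar", title="Measurements per Drug Regimen")
plt.ylabel("Number of Measurements")
plt.show()

plt.bar(regimen_counts.index, regimen_counts.values)
plt.xticks(rotation=90)
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Measurements")
plt.show()

# Pie plot of female versus male mice, first with pandas and then with pyplot
sex_counts = clean_df.drop_duplicates(subset="Mouse ID")["Sex"].value_counts()
sex_counts.plot(kind="pie", autopct="%1.1f%%", title="Female versus Male Mice")
plt.ylabel("")
plt.show()

plt.pie(sex_counts, labels=sex_counts.index, autopct="%1.1f%%")
plt.title("Female versus Male Mice")
plt.show()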
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
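# A minimal sketch (not part of the original cell), reusing the hypothetical `clean_df`.
treatments_of_interest = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Last (greatest) timepoint per mouse, merged back to recover the final tumor volume
last_timepoints = clean_df.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_volumes = pd.merge(last_timepoints, clean_df, on=["Mouse ID", "Timepoint"], how="left")

tumor_vol_data = []
for drug in treatments_of_interest:
    vols = final_volumes.loc[final_volumes["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_vol_data.append(vols)
    q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    print(f"{drug} potential outliers: {vols[(vols < lower) | (vols > upper)].values}")

plt.boxplot(tumor_vol_data, labels=treatments_of_interest)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()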
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
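# A minimal sketch (not part of the original cell), reusing the hypothetical `clean_df`;
# the mouse plotted is simply the first Capomulin mouse in the data.
capomulin_data = clean_df[clean_df["Drug Regimen"] == "Capomulin"]
sample_mouse = capomulin_data["Mouse ID"].iloc[0]
one_mouse = capomulin_data[capomulin_data["Mouse ID"] == sample_mouse]
plt.plot(one_mouse["Timepoint"], one_mouse["Tumor Volume (mm3)"])
plt.title(f"Capomulin treatment of mouse {sample_mouse}")
plt.xlabel("Timepoint (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.show()

# Scatter of average tumor volume versus mouse weight
capomulin_averages = capomulin_data.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
plt.scatter(capomulin_averages["Weight (g)"], capomulin_averages["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()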
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
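# A minimal sketch (not part of the original cell), reusing `capomulin_averages` from the
# sketch in the previous cell (hypothetical name).
weights = capomulin_averages["Weight (g)"]
avg_vols = capomulin_averages["Tumor Volume (mm3)"]
print(f"The correlation between weight and average tumor volume is {round(st.pearsonr(weights, avg_vols)[0], 2)}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(weights, avg_vols)
plt.scatter(weights, avg_vols)
plt.plot(weights, slope * weights + intercept, "r-")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()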
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import sem
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
study_data_complete = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID")
# Display the data table for preview
study_data_complete.head()
# Checking the number of mice.
len(study_data_complete["Mouse ID"].unique())
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mouse_ids = study_data_complete.loc[study_data_complete.duplicated(subset=['Mouse ID', 'Timepoint']),'Mouse ID'].unique()
duplicate_mouse_ids
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mouse_data = study_data_complete.loc[study_data_complete["Mouse ID"] == "g989"]
duplicate_mouse_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_study_data = study_data_complete[study_data_complete['Mouse ID'].isin(duplicate_mouse_ids)==False]
clean_study_data.head()
# Checking the number of mice in the clean DataFrame.
len(clean_study_data["Mouse ID"].unique())
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean = np.mean(clean_study_data["Tumor Volume (mm3)"])
median = np.median(clean_study_data["Tumor Volume (mm3)"])
variance = np.var(clean_study_data["Tumor Volume (mm3)"], ddof = 0)
sd = np.std(clean_study_data["Tumor Volume (mm3)"], ddof = 0)
sample_volume = clean_study_data.sample(75)
volume = sem(sample_volume["Tumor Volume (mm3)"])
summary_statistics = pd.DataFrame({"Mean":[mean],
"Median":[median],
"Variance":[variance],
"Standard Deviation":[sd],
"SEM":[volume],
})
summary_statistics.head()
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
regimen = clean_study_data.groupby('Drug Regimen')
regimen_mean = regimen.mean()
regimen_median = regimen.median()
regimen_variance = regimen.var()
regimen_sd = regimen.std()
regimen_sem = regimen.sem()
summary_statistics2 = pd.DataFrame({"Mean": regimen_mean["Tumor Volume (mm3)"],
"Median": regimen_median["Tumor Volume (mm3)"],
"Variance": regimen_variance["Tumor Volume (mm3)"],
"Standard Deviation": regimen_sd["Tumor Volume (mm3)"],
"SEM": regimen_sem["Tumor Volume (mm3)"]
})
summary_statistics2
# This method produces everything in a single groupby function.
###Output
_____no_output_____
###Markdown
Bar Plots
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
mice_df = clean_study_data.groupby("Drug Regimen")
var = mice_df['Mouse ID'].count()
var.plot(kind = 'bar',color ='r',title = "Total Mice per Treatment", alpha = .75, edgecolor = 'k')
plt.ylabel('Number of Mice')
plt.show()
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
plt.bar(var.index,var,color='r',alpha=.75,edgecolor='k')
plt.xticks(rotation=90)
plt.ylabel('Number of Mice')
plt.xlabel('Regimen')
plt.show()
###Output
_____no_output_____
###Markdown
Pie Plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender = mouse_metadata.loc[mouse_metadata['Mouse ID'] != 'g989']
gender_plot = gender['Sex'].value_counts()
gender_plot.plot(kind='pie', shadow = True, autopct = '%1.2f%%')
plt.title('Number of Mice by Gender')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
labels = gender_plot.index
sizes = gender_plot
chart = plt.pie(sizes,autopct='%1.2f%%',labels=labels, shadow=True)
plt.ylabel('Sex')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
treatment = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
#start by getting the last (greatest) timepoint for each mouse
timepoint_df = clean_study_data[['Mouse ID', 'Timepoint', 'Drug Regimen']]
filtered_df=timepoint_df[timepoint_df['Drug Regimen'].isin(treatment)]
grouped_df = filtered_df.groupby('Mouse ID')['Timepoint'].max()
# merge this group df with the original dataframe to get the tumor volume at the last timepoint
merged_df = pd.merge(grouped_df,clean_study_data,on=['Mouse ID','Timepoint'],how = 'left')
merged_df.head()
#Put treatments into a list for a for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol_list = []
for drug in treatment:
    # Locate the rows which contain mice on each drug and pull their final tumor volumes
    drug_vols = merged_df.loc[merged_df['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
    tumor_vol_list.append(drug_vols)
    quartiles = drug_vols.quantile([.25,.5,.75]).round(2)
    lowerq = quartiles[.25].round(2)
    upperq = quartiles[.75].round(2)
    iqr = round(upperq-lowerq,2)
    lower_bound = round(lowerq - (1.5*iqr),2)
    upper_bound = round(upperq+(1.5*iqr),2)
    print(f"{drug} potential outliers: {drug_vols[(drug_vols < lower_bound) | (drug_vols > upper_bound)].values}")
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volume
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
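# A minimal sketch (not in the original cell): box plot of the final tumor volumes gathered
# in tumor_vol_list above, one box per treatment regimen.
plt.boxplot(tumor_vol_list, labels=treatment)
plt.ylabel("Final Tumor Volume (mm3)")
plt.title("Final Tumor Volume by Drug Regimen")
plt.show()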
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
#capomulin df
capomulin_df = clean_study_data.loc[clean_study_data['Drug Regimen']=='Capomulin']
print(len(capomulin_df['Mouse ID'].unique()))
capomulin_df.head()
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_mouse = clean_study_data.loc[clean_study_data['Mouse ID']=='u364']
x_axis=capomulin_mouse['Timepoint']
y_axis=capomulin_mouse['Tumor Volume (mm3)']
plt.ylabel('Tumor Volume')
plt.xlabel('Timepoint')
plt.title('Timepoint vs. Tumor Volume')
plt.plot(x_axis,y_axis)
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_mouse = clean_study_data.loc[clean_study_data['Drug Regimen']=='Capomulin']
capomulin_df = capomulin_mouse.groupby('Weight (g)')
mean_tumor= capomulin_df['Tumor Volume (mm3)'].mean()
weight_tumor=pd.DataFrame(mean_tumor).reset_index()
weight_tumor.plot(kind='scatter',x='Weight (g)',y = 'Tumor Volume (mm3)')
plt.title('Weight (g) vs. Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
var1 = weight_tumor['Weight (g)']
var2 = weight_tumor['Tumor Volume (mm3)']
corr = st.pearsonr(var1,var2)
print(f"The correlation coefficient of weight and average tumor volume is {corr[0]}")
(slope, intercept, rvalue, pvalue, stderr) = linregress(var1,var2)
regress_vals = var1*slope+intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(var1,var2)
plt.plot(var1, regress_vals,'r-')
plt.annotate(line_eq,(20,37), fontsize= 15,color ='r')
###Output
_____no_output_____
###Markdown
Observations and Insights Look across all previously generated figures and tables and write at least three observations or inferences that can be made from the data. Include these observations at the top of the notebook. 1) We can observe from the summary statistics table that, among all the drugs in the Drug Regimen, two in particular stand out from the rest: Capomulin and Ramicane. These two drugs report mean Tumor Volumes (mm3) of 40.675741 and 40.216745, and their medians are also quite similar: 41.557809 and 40.673236, so we could suggest that both drugs have an almost normal distribution. The spread of the data for these two drugs is also quite small: 4.994774 and 4.846308, which is almost two times smaller than the spread of the other drugs. With all this information we can quickly deduce that the best results of this investigation will be found among the records that used these two drugs. 2) Although Capomulin and Ramicane appear more effective in reducing tumor volume, it is also important to notice that these two drugs have at least 75 more measurements than the other drugs in the Drug Regimen. This could be a crucial factor when evaluating their effectiveness, since we could argue they got better results because a higher number of measurements was performed. 3) Mice have an almost equal split in terms of sex: 51% male, 49% female. 4) When we compare the final tumor volumes for Capomulin, Ramicane, Infubinol, and Ceftamin, we observe that the Infubinol data is influenced by a large outlier: 36.321346. Again, Capomulin and Ramicane outperform Infubinol and Ceftamin by having an approximate maximum final tumor volume of 45~48; when we compare these two drugs closely, Ramicane appears slightly more effective than Capomulin. 5) By randomly selecting a mouse that used the Capomulin drug we were able to observe that, over timepoints 0 to 45, the tumor volume decreased by almost a factor of two, which suggests the drug is highly effective in reducing tumor size. 6) There is a very high correlation between weight and average tumor volume: 0.95, which could suggest that controlling weight is a path toward achieving higher effectiveness in the drugs we want to develop.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import sem
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
mouse_df = mouse_metadata.merge(study_results, on='Mouse ID')
# Display the data table for preview
mouse_df
# Checking the number of mice.
mouse_df['Mouse ID'].nunique()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
mouse_df[mouse_df.duplicated(subset=['Mouse ID', 'Timepoint'], keep=False)]
# Optional: Get all the data for the duplicate mouse ID.
duplicate = mouse_df[mouse_df['Mouse ID']== 'g989']
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
mouse_uniq = mouse_df.drop(duplicate.index)
# Checking the number of mice in the clean DataFrame.
mouse_uniq['Mouse ID'].nunique()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
drug_r = mouse_uniq.groupby('Drug Regimen')
d_mean = drug_r.mean()['Tumor Volume (mm3)']
d_median = drug_r.median()['Tumor Volume (mm3)']
d_var = drug_r.var()['Tumor Volume (mm3)']
d_std = drug_r.std()['Tumor Volume (mm3)']
d_sem = drug_r['Tumor Volume (mm3)'].sem()
summary_tv = pd.DataFrame({'Mean': d_mean,
'Median': d_median,
'Variance': d_var,
'Std Dev': d_std,
'SEM': d_sem})
summary_tv
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
drug_r.agg({'Tumor Volume (mm3)':['mean', 'median', 'var', 'std', 'sem']})
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
drug_r.agg({'Mouse ID':'count'}).sort_values(by='Mouse ID', ascending=False).plot(kind='bar')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.title('Number of Measurements by Drug Regimen')
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
x = mouse_uniq['Drug Regimen'].value_counts()
y = mouse_uniq['Drug Regimen'].value_counts()
plt.bar(x.index, y, label='Mouse ID', width= 0.5)
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.title('Number of Measurements by Drug Regimen')
plt.legend()
plt.xticks(rotation=90)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
mouse_uniq['Sex'].value_counts().plot(kind='pie', autopct='%1.1f%%', startangle=140)
plt.ylabel('')
plt.title('Female vs Male Distribution')
plt.axis('equal')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
values = mouse_uniq['Sex'].value_counts()
labels= values.index
plt.pie(values, labels=labels, autopct='%1.1f%%', startangle=140)
plt.title('Female vs Male Distribution')
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
drug_list = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
drug_df = mouse_uniq[mouse_uniq['Drug Regimen'].isin(drug_list)]
# Start by getting the last (greatest) timepoint for each mouse
mouse_max = drug_df.groupby(['Mouse ID']).max()[['Timepoint']]
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
mouse_max = mouse_max.reset_index()
drug_df = drug_df.merge(mouse_max, on=['Mouse ID', 'Timepoint'])
# Put treatments into a list for for loop (and later for plot labels)
drug_list = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
for drug in drug_list:
drug_data = drug_df.loc[drug_df['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
tumor_vol.append(drug_data)
quartiles = drug_data.quantile([.25,.5,.75])
lowerq = quartiles[.25]
upperq = quartiles[.75]
iqr = upperq - lowerq
lower_bound = lowerq - (iqr * 1.5)
upper_bound = upperq + (iqr * 1.5)
outliers = drug_data[(drug_data > upper_bound) | (drug_data < lower_bound)]
print(outliers)
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
flierprops = dict(marker='o', markerfacecolor='blue', markersize=12,
linestyle='none')
fig1, ax1 = plt.subplots()
ax1.boxplot(tumor_vol, labels= drug_list, flierprops =flierprops)
ax1.set_title('Final Tumor Volume of Treatment Regimens')
ax1.set_ylabel('Tumor Volume (mm3)', fontsize=10)
ax1.set_xlabel('Drug Regimen', fontsize=10)
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
single_m = mouse_uniq[mouse_uniq['Mouse ID'] == 's185']
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
plt.plot(single_m['Timepoint'], single_m['Tumor Volume (mm3)'], color='red')
plt.title('Capomulin Results: Mouse s185')
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
from scipy.stats import linregress
import scipy.stats as st
# Create a df with only the Capomulin records
capomulin_df = mouse_uniq[mouse_uniq['Drug Regimen'] == 'Capomulin']
# Use groupby Weight and calculate the Mean of the Tumor Volume(mm3) then reset the index in order to create:
# x & y variables
capo_g = capomulin_df.groupby(['Weight (g)']).mean()[['Tumor Volume (mm3)']]
capo_g = capo_g.reset_index()
x_values = capo_g['Weight (g)']
y_values = capo_g['Tumor Volume (mm3)']
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(x_values, y_values)
print(f"The correlation between both factors is {round(correlation[0],2)}")
# Pass the x and y values into the lineregress function to create our linear regression model
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values, y_values)
plt.plot(x_values,regress_values,"r-")
plt.title('Weight and Average Tumor Volume Relationship')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()
print(f'Linear Regression Model: {line_eq}')
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
plt.scatter(x= x_values, y= y_values)
plt.title('Weight and Average Tumor Volume Relationship')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
mouse_metadata.head()
study_results.head()
# Combine the data into a single dataset
mergedf = pd.merge(mouse_metadata,study_results,on="Mouse ID")
# Display the data table for preview
mergedf.head()
mergedf.shape
# Checking the number of mice.
len(mergedf["Mouse ID"].unique())
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dupmouseid = mergedf[mergedf.duplicated(subset=['Mouse ID','Timepoint'])]
dupmouseid
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
cleandf = mergedf.loc[mergedf["Mouse ID"]!="g989"]
cleandf.tail()
# Checking the number of mice in the clean DataFrame.
len(cleandf["Mouse ID"].unique())
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
meanstat = cleandf.groupby("Drug Regimen").mean()["Tumor Volume (mm3)"]
medianstat = cleandf.groupby("Drug Regimen").median()["Tumor Volume (mm3)"]
variancestat = cleandf.groupby("Drug Regimen").var()["Tumor Volume (mm3)"]
standard_deviationstat = cleandf.groupby("Drug Regimen").std()["Tumor Volume (mm3)"]
SEMstat = cleandf.groupby("Drug Regimen").sem()["Tumor Volume (mm3)"]
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
summarydf = pd.DataFrame({"Mean":meanstat,"Median":medianstat, "Variance":variancestat,
"Standard Deviation":standard_deviationstat, "SEM":SEMstat})
summarydf
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summarystats = cleandf.groupby("Drug Regimen").agg({"Tumor Volume (mm3)":["mean","median","var","std","sem"]})
summarystats
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
totalmeasurment = cleandf.groupby("Drug Regimen")["Mouse ID"].count()
totalmeasurment.plot(kind="bar")
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
plt.barh(totalmeasurment.index,totalmeasurment,color="red")
# Generate a pie plot showing the distribution of female versus male mice using pandas
distgender = cleandf.groupby("Sex")["Mouse ID"].count()
distgender.plot(kind="pie")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(distgender, autopct="%1.1f%%", shadow=True, startangle=140)
plt.legend(distgender.index)
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
regmice = cleandf.groupby("Mouse ID").max()["Timepoint"]
regmice
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
mergeregmicedf = pd.merge(cleandf, regmice, on=["Mouse ID", "Timepoint"])
mergeregmicedf
# Put treatments into a list for for loop (and later for plot labels)
treatments =["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumorvoldata = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for treatment in treatments:
# Locate the rows which contain mice on each drug and get the tumor volumes
micedrug = mergeregmicedf.loc[mergeregmicedf["Drug Regimen"] == treatment, "Tumor Volume (mm3)"]
# add subset
tumorvoldata.append(micedrug)
# Determine outliers using upper and lower bounds
quartiles = micedrug.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
print(f"outliers are {micedrug.loc[(micedrug<lower_bound)|(micedrug>upper_bound)]}")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumorvoldata, labels= treatments)
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
mousetreated = cleandf.loc[cleandf["Drug Regimen"]=="Capomulin"]
mousetreated
mousedata = mousetreated.loc[mousetreated["Mouse ID"]=="s185"]
mousedata
plt.plot(mousedata["Timepoint"],mousedata["Tumor Volume (mm3)"])
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
mousetreated = cleandf.loc[cleandf["Drug Regimen"]=="Capomulin"]
mousetreated
avgtumorvol = mousetreated.groupby("Mouse ID")["Tumor Volume (mm3)"].mean()
avgtumorvol
mergedf = pd.merge(mouse_metadata,avgtumorvol, on =["Mouse ID"])
x= mergedf["Tumor Volume (mm3)"]
y= mergedf["Weight (g)"]
plt.scatter(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
# Print out the r-squared value along with the plot.
x_values = mergedf['Tumor Volume (mm3)']
y_values = mergedf['Weight (g)']
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(36,22),fontsize=15,color="red")
plt.xlabel('Average Tumor Volume (mm3)')
plt.ylabel('Weight (g)')
print(f"The r-squared is: {rvalue**2}")
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
mouse_metadata_path_df = pd.DataFrame(mouse_metadata)
study_results_path_df = pd.DataFrame(study_results)
combined_df = pd.merge(mouse_metadata_path_df, study_results_path_df, on = "Mouse ID",)
combined_df
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
mouse_count = len(combined_df["Mouse ID"].unique())
mouse_count_total = pd.DataFrame({"Mouse Count": [mouse_count]})
mouse_count_total
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# combined_df_dups = combined_df.duplicated()
# combined_dups = np.where(combined_df_dups == True)
# combined_dups.groupby["Mouse ID"]
clean_df = combined_df.drop_duplicates()
clean_df
# Optional: Get all the data for the duplicate mouse ID.
combined_df_dups = combined_df.duplicated()
combined_df_id = np.where(combined_df_dups == True)
dup_mouse = combined_df.iloc[combined_df_id]
dup_mouse
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combined_df.drop_duplicates()
clean_df
# Checking the number of mice in the clean DataFrame.
mouse_count = len(combined_df["Mouse ID"].unique())
mouse_count
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
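# A minimal sketch (not part of the original cell), grouping the cleaned data by regimen.
grouped_vol = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"]
summary_df = pd.DataFrame({
    "Mean": grouped_vol.mean(),
    "Median": grouped_vol.median(),
    "Variance": grouped_vol.var(),
    "Std Dev": grouped_vol.std(),
    "SEM": grouped_vol.sem(),
})
# The same table from a single aggregation call
grouped_vol.agg(["mean", "median", "var", "std", "sem"])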
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
clean_df.groupby("Drug Regimen")["Mouse ID"].nunique().plot(kind="bar", title = "Mice per Treatment")
plt.ylabel("Count")
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
treatment_name = (clean_df["Drug Regimen"].unique())
clean_df.groupby("Drug Regimen")["Mouse ID"].nunique()
mouse_per_treatment=[]
for treatment in treatment_name:
temp=clean_df.loc[clean_df["Drug Regimen"]==treatment]
mouse_per_treatment.append(len(np.unique(temp["Mouse ID"])))
mouse_per_treatment
x= treatment_name
height = mouse_per_treatment
plt.bar(x, height)
plt.xticks(rotation = "vertical")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
clean_df.groupby("Sex")["Mouse ID"].nunique().plot(kind="pie", autopct='%1.1f%%', shadow=True, startangle=45)
plt.xlabel("Male vs Female")
plt.ylabel("Count")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
treatment_name = (clean_df["Sex"].unique())
mouse_per_treatment=[]
for treatment in treatment_name:
temp=clean_df.loc[clean_df["Sex"]==treatment]
mouse_per_treatment.append(len(np.unique(temp["Mouse ID"])))
fig = plt.figure()
plt.pie(mouse_per_treatment, labels = treatment_name, autopct='%1.1f%%', shadow=True, startangle=45)
plt.xlabel("Male vs Female")
plt.ylabel("Count")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
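# A minimal sketch (not part of the original cell), using the `clean_df` defined above.
treatments_of_interest = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Last (greatest) timepoint per mouse, merged back to recover the final tumor volume
last_timepoints = clean_df.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_volumes = pd.merge(last_timepoints, clean_df, on=["Mouse ID", "Timepoint"], how="left")

tumor_vol_data = []
for drug in treatments_of_interest:
    vols = final_volumes.loc[final_volumes["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_vol_data.append(vols)
    q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    print(f"{drug} potential outliers: {vols[(vols < lower) | (vols > upper)].values}")

plt.boxplot(tumor_vol_data, labels=treatments_of_interest)
plt.ylabel("Final Tumor Volume (mm3)")
plt.show()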
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
random_mouse = clean_df[clean_df["Mouse ID"] == "j913"]
x = random_mouse["Timepoint"]
y = random_mouse["Tumor Volume (mm3)"]
# combined_df.plot(kind = "scatter", x = random_mouse["Timepoint"], y = random_mouse["Tumor Volume (mm3)"])
plt.plot(x,y,"r.")
plt.xlabel("Timepoint")
plt.show()
timepoint = clean_df.groupby(clean_df["Timepoint"])
tumor_volume = clean_df.groupby(clean_df["Tumor Volume (mm3)"])
# plt.scatter(timepoint, tumor_volume)
tumor_volume.head()
# plt.scatter(mouse_metadata_path_df[Timepoint], study_results_path_df["Tumor Volume (mm3)"])
# combined_df.plot(kind = "scatter", x = mouse_metadata_path_df["Timepoint"], y = study_results_path_df["Tumor Volume (mm3)"])
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_mice = clean_df[clean_df["Drug Regimen"] == "Capomulin"]
avg_by_mouse = capomulin_mice.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
plt.scatter(avg_by_mouse["Weight (g)"], avg_by_mouse["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
###Output
C:\Users\Yanwho\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
res_values = method(rvalues)
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
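# A minimal sketch (not part of the original cell): weight versus average tumor volume
# for Capomulin, using the `clean_df` defined above.
capomulin_rows = clean_df[clean_df["Drug Regimen"] == "Capomulin"]
capomulin_averages = capomulin_rows.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
weights = capomulin_averages["Weight (g)"]
avg_vols = capomulin_averages["Tumor Volume (mm3)"]
print(f"The correlation between weight and average tumor volume is {round(st.pearsonr(weights, avg_vols)[0], 2)}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(weights, avg_vols)
plt.scatter(weights, avg_vols)
plt.plot(weights, slope * weights + intercept, "r-")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()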
###Output
_____no_output_____
###Markdown
Observations and Insights Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
import numpy as np
from sklearn import datasets
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combined_data = pd.merge(mouse_metadata,study_results, on="Mouse ID")
combined_data
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
combined_data.describe()
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
combined_data["Drug Regimen"].value_counts().plot(kind="bar")
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
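# A minimal sketch (not part of the original cell): the pyplot version of the chart above.
regimen_counts = combined_data["Drug Regimen"].value_counts()
plt.bar(regimen_counts.index, regimen_counts.values)
plt.xticks(rotation=90)
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Data Points")
plt.show()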
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
mice = combined_data["Sex"]
list_mice = []
m_mice = mice.str.count("Male").sum()
f_mice = mice.str.count("Female").sum()
#print(m_mice)
#print(f_mice)
list_mice.append(m_mice)
list_mice.append(f_mice)
labels = ['male' , 'female']
colors = ["red" , "blue"]
explode = (0.1, 0)
df = pd.DataFrame(list_mice)
#df.plot.pie(y=labels)
df = pd.DataFrame({'labels': labels,
'gender': list_mice},
index=['Male', 'Female'])
plot = df.plot.pie(y='gender', figsize=(5, 5))
###Output
_____no_output_____
###Markdown
plt.pie(list_mice, explode=explode, labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=140) Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
my_rows = combined_data.loc[combined_data['Timepoint'] == 45]
sorted_rows = my_rows.sort_values("Tumor Volume (mm3)")
sorted_rows.head(4)
Q1 = sorted_rows["Tumor Volume (mm3)"].quantile(0.25)
Q3 = sorted_rows["Tumor Volume (mm3)"].quantile(0.75)
IQR = Q3 - Q1
print ('25 percentile : ' + str(Q1))
print ('75 percentile : ' + str(Q3))
print( ' IQR : ' + str(IQR))
my_outliers_low = combined_data.loc[combined_data['Tumor Volume (mm3)'] < Q1 - 1.5*IQR]
my_outliers_high = combined_data.loc[combined_data['Tumor Volume (mm3)'] > Q3 + 1.5*IQR]
if len(my_outliers_high ) == 0:
print ('no high outliers')
if len(my_outliers_low ) == 0:
print ('no low outliers')
#print (my_outliers_low)
#print(my_outliers_high)
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
top4 = sorted_rows.head(4)
#top4
top4_list = top4["Tumor Volume (mm3)"]
top4.boxplot(column="Tumor Volume (mm3)")
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
my_rows = combined_data.loc[combined_data['Drug Regimen'] == "Capomulin"]
my_rows_mouse = my_rows.loc[my_rows['Mouse ID'] == "s185"]
#my_rows_mouse
x_row = my_rows_mouse["Timepoint"]
y_col = my_rows_mouse["Tumor Volume (mm3)"]
#print(type(x_row))
plt.plot(x_row,y_col)
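# Add axis labels and a title so the line plot is self-describing
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Capomulin: tumor volume over time for mouse s185")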
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
my_avg_rows = my_rows.groupby('Mouse ID')
# Note: the names below are swapped relative to their contents --
# my_avg_tumour holds the per-mouse weights and my_avg_weight holds the mean tumor volumes
my_avg_tumour = list(my_avg_rows["Weight (g)"])
my_rows_mouse = my_rows.loc[my_rows['Timepoint'] == 0]  # Timepoint is numeric, not a string
my_avg_weight = my_avg_rows["Tumor Volume (mm3)"].mean()
#print(my_avg_weight)
my_list = my_avg_weight.reset_index()
my_y_y = pd.DataFrame(my_list)
#print(my_y_y)
my_y = list(my_y_y["Tumor Volume (mm3)"])
#plt.plot(my_avg_weight, my_avg_tumour)
#print (my_y)
print(type(my_avg_tumour))
#print (len(my_y))
#a = [1,3,4,5,6]
#b = [10,30,40,50,60]
#plt.scatter(a,b)
#print(my_avg_tumour)
mylist = []
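# Each element of my_avg_tumour is a (Mouse ID, weight Series) tuple from the groupby,
# so the split below pulls the weight value out of the tuple's string representation (fragile, but works here)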
for elems in my_avg_tumour :
#print (elems)
my_str = str(elems)
mylist.append(my_str.split()[2])
#print(my_y)
plt.xlabel('Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
plt.scatter(mylist, my_y)
my_dict = {}
for x,y in zip(mylist,my_y) :
my_dict[x] = y
#print (mylist.sort())
#my_list_s = mylist.sort()
#print(my_list_s)
#my_y_sorted = []
#for eachelem in my_list_s :
# my_y_sorted = my_dixt[eachelem]
#plt.xlabel('weight')
#plt.ylabel("tumor size")
#plt.scatter(my_list_s, my_y_sorted)
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
#print(type(mylist[0]))
float_list = []
for eachelem in mylist :
float_list.append(float(eachelem))
correlation = st.pearsonr(float_list, my_y)
#print (round(correlation[0],2))
print("The correlation between both factors is " + str(round(correlation[0],2)))
x_values = float_list
y_values = my_y
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
slope_f = (float(slope))
intercept_f = (float(intercept))
rvalue_f = (float(rvalue))
pvalue_f = (float(pvalue))
stderr_f = (float(stderr))
regress_values = []
for eachval in x_values :
regress_values.append(eachval * slope_f + intercept)
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(18,36),fontsize=15,color="red")  # place the equation inside the data range
plt.xlabel('Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
###Output
The correlation between both factors is 0.84
###Markdown
Observations and Insights The first observations from the study show that Capomulin and Ramicane produced similarly successful results compared with the other drugs analyzed, with average tumor volumes of 40.7 and 40.2 mm^3 during treatment; since every mouse began with a tumor volume of 45 mm^3, this means the tumors shrank. The least successful drug was Ketapril, with a mean volume of 55.2 mm^3 during treatment. One Infubinol measurement falls below Q1 - 1.5*IQR and appears as an outlier on the box plot in this report. For Capomulin we found a moderate negative correlation of -0.58 between tumor volume and timepoint, which means tumors tend to shrink over time on this drug. On the other hand, mouse weight shows a moderate positive correlation with tumor volume (0.53).
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined = study_results.merge( mouse_metadata, on="Mouse ID", how='left')
# Display the data table for preview
combined = combined.sort_values(['Mouse ID', 'Timepoint'], ascending=[True, True])
combined = combined.reset_index()
combined = combined.drop(combined.columns[0], axis=1)
print(combined.head(10))
# Checking the number of mice.
number_of_mice = len(pd.unique(mouse_metadata['Mouse ID']))
print(f"The number of mice is : {str(number_of_mice)}")
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
combined['duplicated'] = combined.duplicated(subset=['Mouse ID', 'Timepoint'], keep=False)
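# keep=False flags every row of a duplicated (Mouse ID, Timepoint) pair, not just the repeats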
combined_dup = combined[combined['duplicated'] == True]
mice_with_duplicated_info = len(pd.unique(combined_dup['Mouse ID']))
duplicated_mouse_id = pd.unique(combined_dup['Mouse ID'])
print(f"The mouse with duplicated information is: {mice_with_duplicated_info} \
and the Mouse ID is: {duplicated_mouse_id}")
# Optional: Get all the data for the duplicate mouse ID.
combined_dup = combined_dup.drop(['duplicated'], axis=1)
combined_dup = combined_dup.reset_index()
combined_dup = combined_dup.drop(combined_dup.columns[0], axis=1)
print(combined_dup)
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
combined_no_dup = combined[combined['duplicated'] == False]
combined_no_dup = combined_no_dup[combined_no_dup['Mouse ID'] != duplicated_mouse_id[0]]
combined_no_dup = combined_no_dup.drop(['duplicated'], axis=1)
combined_no_dup = combined_no_dup.reset_index()
combined_no_dup = combined_no_dup.drop(combined_no_dup.columns[0], axis=1)
print(combined_no_dup)
# Checking the number of mice in the clean DataFrame.
number_of_mice_no_duplicated = len(pd.unique(combined_no_dup['Mouse ID']))
print(f"The number of mice is : {str(number_of_mice_no_duplicated)}")
###Output
The number of mice is : 248
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
#df = df.drop(df.columns[[0, 1, 3]], axis=1)
combined = combined_no_dup
combined_no_dup = combined_no_dup.drop(combined_no_dup.columns[0], axis=1)
summary_mean = combined_no_dup.groupby(['Drug Regimen']).mean()
summary_median = combined_no_dup.groupby(['Drug Regimen']).median()
summary_var = combined_no_dup.groupby(['Drug Regimen']).var()
summary_std = combined_no_dup.groupby(['Drug Regimen']).std()
summary_sem = combined_no_dup.groupby(['Drug Regimen']).sem(ddof=1)
# Assemble the resulting series into a single summary dataframe.
summary_statistics = {
'TV (mm3) mean': summary_mean['Tumor Volume (mm3)'],
'TV (mm3) median': summary_median['Tumor Volume (mm3)'],
'TV (mm3) Variance': summary_var['Tumor Volume (mm3)'],
'TV (mm3) StDev': summary_std['Tumor Volume (mm3)'],
'TV (mm3) SEM': summary_sem['Tumor Volume (mm3)']
}
summary_statistics_df = pd.DataFrame(summary_statistics)
summary_statistics_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
summary_agg = combined_no_dup.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(
[
np.mean,
np.median,
np.var,
np.std,
st.sem
]
)
print(' ')
print(' ***** Tumor Volume (mm3) *****')
print(' ')
print(summary_agg)
###Output
***** Tumor Volume (mm3) *****
mean median var std sem
Drug Regimen
Capomulin 40.675741 41.557809 24.947764 4.994774 0.329346
Ceftamin 52.591172 51.776157 39.290177 6.268188 0.469821
Infubinol 52.884795 51.820584 43.128684 6.567243 0.492236
Ketapril 55.235638 53.698743 68.553577 8.279709 0.603860
Naftisol 54.331565 52.509285 66.173479 8.134708 0.596466
Placebo 54.033581 52.288934 61.168083 7.821003 0.581331
Propriva 52.320930 50.446266 43.852013 6.622085 0.544332
Ramicane 40.216745 40.673236 23.486704 4.846308 0.320955
Stelasyn 54.233149 52.431737 59.450562 7.710419 0.573111
Zoniferol 53.236507 51.818479 48.533355 6.966589 0.516398
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
measurements_taken = combined_no_dup["Drug Regimen"].value_counts()
measurements_taken.plot.bar()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
labels = measurements_taken.index.tolist()
measurements = measurements_taken.tolist()
plt.bar(labels, measurements)
plt.xticks(rotation=90)
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Measurements")
plt.show()
# Extra view: pie chart of the same counts, with percentages and raw counts in the wedge labels
def make_autopct(values):
    def my_autopct(pct):
        total = sum(values)
        val = int(round(pct*total/100.0))
        return '{p:.2f}% ({v:d})'.format(p=pct,v=val)
    return my_autopct
plt.pie(measurements,labels=labels, autopct=make_autopct(measurements), shadow=True, radius= 2.5)
# Generate a pie plot showing the distribution of female versus male mice using pandas
mice_sex = combined_no_dup["Sex"].value_counts()
mice_sex.plot(kind='pie',autopct='%1.1f%%')
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sex_labels = mice_sex.index.tolist()
sex_count = mice_sex.tolist()
plt.pie(sex_count, labels = sex_labels, autopct="%1.1f%%", shadow=True )
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
drugs = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
filtered_combined = combined[combined['Drug Regimen'].isin(drugs)]
# Start by getting the last (greatest) timepoint for each mouse
index_max = filtered_combined.groupby(['Mouse ID'])['Timepoint'].transform(max)==filtered_combined['Timepoint']
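# transform(max) broadcasts each mouse's final timepoint to every one of its rows,
# so comparing against the row's own Timepoint gives a boolean mask of last observations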
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
filtered_combined_max = filtered_combined[index_max]
print(filtered_combined_max)
# Put treatments into a list for for loop (and later for plot labels)
print(drugs)  # reuse the treatment list defined above for the loop and the plot labels
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol = []
for drug in drugs:
# Locate the rows which contain mice on each drug and get the tumor volumes
drug_df = filtered_combined_max['Drug Regimen'] ==drug
tumor_vol = filtered_combined_max[drug_df]['Tumor Volume (mm3)']
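# Note: tumor_vol is reassigned (not appended to) on each pass, so the loop draws one box plot per drug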
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = tumor_vol.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
# add subset
# Determine outliers using upper and lower bounds
print(" ")
print(f"The lower quartile of {drug} is: {lowerq}")
print(f"The upper quartile of {drug} is: {upperq}")
print(f"The interquartile range of {drug} is: {iqr}")
print(f"The the median of {drug} is: {quartiles[0.5]} ")
fig1, ax1 = plt.subplots()
ax1.set_title(f'Tumor Volume for {drug}')
ax1.set_ylabel('mm3')
ax1.boxplot(tumor_vol)
plt.show()
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
# (check above)
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
mouse_treated_capomulin = combined_no_dup[combined_no_dup['Drug Regimen']=='Capomulin']
# Plot a single Capomulin mouse as a line of tumor volume over time
single_mouse = mouse_treated_capomulin[mouse_treated_capomulin['Mouse ID'] == mouse_treated_capomulin['Mouse ID'].iloc[0]]
plt.plot(single_mouse['Timepoint'], single_mouse['Tumor Volume (mm3)'], marker='o')
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
# Correlation between tumor volume and timepoint across all Capomulin measurements
tv = mouse_treated_capomulin['Tumor Volume (mm3)']
timepoint = mouse_treated_capomulin['Timepoint']
correlation = st.pearsonr(tv,timepoint)
print(f"The correlation between both factors is {round(correlation[0],2)}")
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
mouse_treated_capomulin = combined_no_dup[combined_no_dup['Drug Regimen']=='Capomulin']
tv = mouse_treated_capomulin['Tumor Volume (mm3)']
weight = mouse_treated_capomulin['Weight (g)']
plt.scatter(weight, tv, facecolors="red", edgecolors="black", alpha=0.75)
plt.xlabel('Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(weight,tv)
print(f"The correlation between both factors is {round(correlation[0],2)}")
from scipy.stats import linregress
(slope, intercept, rvalue, pvalue, stderr) = linregress(weight, tv)
regress_values = weight * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(weight,tv)
plt.plot(weight,regress_values,"r-")
plt.annotate(line_eq,(7,11),fontsize=15,color="red")
plt.xlabel('Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
plt.show()
print(f"The linear regression model is {line_eq}")
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
study_result = pd.merge(study_results, mouse_metadata, on="Mouse ID")
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
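# A minimal sketch of the checks above, assuming the merged frame study_result from this cell:
# print(study_result["Mouse ID"].nunique())
# dup_ids = study_result.loc[study_result.duplicated(["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
# print(study_result[study_result["Mouse ID"].isin(dup_ids)])
# clean_df = study_result[~study_result["Mouse ID"].isin(dup_ids)].reset_index(drop=True)
# print(clean_df["Mouse ID"].nunique())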
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
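# A minimal sketch of both approaches, assuming the cleaned frame clean_df from the previous cell:
# grouped = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"]
# summary = pd.DataFrame({"Mean": grouped.mean(), "Median": grouped.median(),
#                         "Variance": grouped.var(), "Std Dev": grouped.std(), "SEM": grouped.sem()})
# summary_agg = grouped.agg(["mean", "median", "var", "std", "sem"])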
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
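# Possible one-liners for the plots above -- sketch, assuming clean_df:
# clean_df["Drug Regimen"].value_counts().plot.bar()
# counts = clean_df["Drug Regimen"].value_counts(); plt.bar(counts.index, counts.values)
# clean_df["Sex"].value_counts().plot.pie(autopct="%1.1f%%")
# plt.pie(clean_df["Sex"].value_counts(), labels=clean_df["Sex"].value_counts().index, autopct="%1.1f%%")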
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
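# A minimal sketch, assuming clean_df: take each mouse's last timepoint, then check the IQR per regimen.
# last_tp = clean_df.groupby("Mouse ID")["Timepoint"].max().reset_index()
# final_tv = last_tp.merge(clean_df, on=["Mouse ID", "Timepoint"])
# for drug in ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]:
#     vols = final_tv.loc[final_tv["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
#     q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
#     iqr = q3 - q1
#     print(drug, vols[(vols < q1 - 1.5*iqr) | (vols > q3 + 1.5*iqr)].tolist())
# plt.boxplot([final_tv.loc[final_tv["Drug Regimen"] == d, "Tumor Volume (mm3)"] for d in ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]])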
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
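# Possible sketches, assuming clean_df:
# cap = clean_df[clean_df["Drug Regimen"] == "Capomulin"]
# one_mouse = cap[cap["Mouse ID"] == cap["Mouse ID"].iloc[0]]
# plt.plot(one_mouse["Timepoint"], one_mouse["Tumor Volume (mm3)"])
# per_mouse = cap.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
# plt.scatter(per_mouse["Weight (g)"], per_mouse["Tumor Volume (mm3)"])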
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
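# A minimal sketch, assuming per_mouse from the previous cell:
# r = st.pearsonr(per_mouse["Weight (g)"], per_mouse["Tumor Volume (mm3)"])[0]
# slope, intercept, rvalue, pvalue, stderr = st.linregress(per_mouse["Weight (g)"], per_mouse["Tumor Volume (mm3)"])
# print(f"r = {r:.2f}; fit: y = {slope:.2f}x + {intercept:.2f}")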
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
#Mouse Data Set
mouse_metadata.head()
#Results Data Sets
study_results.head()
#Combined Data Frames
studied_mice = pd.merge(mouse_metadata,study_results, on = "Mouse ID", how="outer")
studied_mice.head()
# Checking the number of mice.
# There are 249 subjects (mice)
number_of_mice=len(mouse_metadata["Mouse ID"].unique())
print(f"There are {number_of_mice} mice for measures.")
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_df=studied_mice[studied_mice.duplicated(subset=["Mouse ID","Timepoint"])]
duplicated_mice = len(duplicate_df["Mouse ID"].unique())
duplicated_mice_serie = pd.Series(duplicate_df["Mouse ID"].unique())
print(f"There are {duplicated_mice} duplicate mice for several measures.")
print(f"Hello {duplicated_mice_serie[0]}")
duplicate_df
# Optional: Get all the data for the duplicate mouse ID.
duplicate_mouse_data = studied_mice.loc[studied_mice["Mouse ID"] == "g989",:]
duplicate_mouse_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
mice_df = studied_mice.loc[studied_mice["Mouse ID"] != "g989",:]
mice_df
# Checking the number of mice in the clean DataFrame.
mices = len(mice_df["Mouse ID"].unique())
print(f"There are {mices} mice for tests.")
tests = mice_df["Mouse ID"].count()
print(f"They run {tests} tests on the mice.")
###Output
There are 248 mice in the cleaned data.
They ran 1880 tests on the mice.
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
drug_regimen = mice_df[["Drug Regimen","Tumor Volume (mm3)"]]
groupby_drug_regimen = drug_regimen.groupby("Drug Regimen")
groupby_drug_regimen.describe()
drug_regimen = mice_df[["Drug Regimen","Tumor Volume (mm3)"]]
drug_regimen_capomulin = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Capomulin"]
drug_regimen_ceftamin = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Ceftamin"]
drug_regimen_infubinol = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Infubinol"]
drug_regimen_ketapril = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Ketapril"]
drug_regimen_naftisol = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Naftisol"]
drug_regimen_placebo = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Placebo"]
drug_regimen_propriva = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Propriva"]
drug_regimen_ramicane = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Ramicane"]
drug_regimen_stelasyn = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Stelasyn"]
drug_regimen_zoniferol = drug_regimen.loc[drug_regimen["Drug Regimen"]=="Zoniferol"]
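# The blocks below compute each statistic drug by drug; the groupby/agg cells further down build the same table in a few lines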
#Capomulin
capomulin_mean = np.mean(drug_regimen_capomulin["Tumor Volume (mm3)"])
capomulin_median = np.median(drug_regimen_capomulin["Tumor Volume (mm3)"])
capomulin_mode = st.mode(drug_regimen_capomulin["Tumor Volume (mm3)"])
capomulin_variance = np.var(drug_regimen_capomulin["Tumor Volume (mm3)"],ddof=0)
capomulin_std = np.std(drug_regimen_capomulin["Tumor Volume (mm3)"],ddof=0)
capomulin_sem = drug_regimen_capomulin["Tumor Volume (mm3)"].sem()
#Ceftamin
ceftamin_mean = np.mean(drug_regimen_ceftamin["Tumor Volume (mm3)"])
ceftamin_median = np.median(drug_regimen_ceftamin["Tumor Volume (mm3)"])
ceftamin_mode = st.mode(drug_regimen_ceftamin["Tumor Volume (mm3)"])
ceftamin_variance = np.var(drug_regimen_ceftamin["Tumor Volume (mm3)"],ddof=0)
ceftamin_std = np.std(drug_regimen_ceftamin["Tumor Volume (mm3)"],ddof=0)
ceftamin_sem = drug_regimen_ceftamin["Tumor Volume (mm3)"].sem()
#Infubinol
infubinol_mean = np.mean(drug_regimen_infubinol["Tumor Volume (mm3)"])
infubinol_median = np.median(drug_regimen_infubinol["Tumor Volume (mm3)"])
infubinol_mode = st.mode(drug_regimen_infubinol["Tumor Volume (mm3)"])
infubinol_variance = np.var(drug_regimen_infubinol["Tumor Volume (mm3)"],ddof=0)
infubinol_std = np.std(drug_regimen_infubinol["Tumor Volume (mm3)"],ddof=0)
infubinol_sem = drug_regimen_infubinol["Tumor Volume (mm3)"].sem()
#Ketapril
ketapril_mean = np.mean(drug_regimen_ketapril["Tumor Volume (mm3)"])
ketapril_median = np.median(drug_regimen_ketapril["Tumor Volume (mm3)"])
ketapril_mode = st.mode(drug_regimen_ketapril["Tumor Volume (mm3)"])
ketapril_variance = np.var(drug_regimen_ketapril["Tumor Volume (mm3)"],ddof=0)
ketapril_std = np.std(drug_regimen_ketapril["Tumor Volume (mm3)"],ddof=0)
ketapril_sem = drug_regimen_ketapril["Tumor Volume (mm3)"].sem()
#Naftisol
naftisol_mean = np.mean(drug_regimen_naftisol["Tumor Volume (mm3)"])
naftisol_median = np.median(drug_regimen_naftisol["Tumor Volume (mm3)"])
naftisol_mode = st.mode(drug_regimen_naftisol["Tumor Volume (mm3)"])
naftisol_variance = np.var(drug_regimen_naftisol["Tumor Volume (mm3)"],ddof=0)
naftisol_std = np.std(drug_regimen_naftisol["Tumor Volume (mm3)"],ddof=0)
naftisol_sem = drug_regimen_naftisol["Tumor Volume (mm3)"].sem()
#Placebo
placebo_mean = np.mean(drug_regimen_placebo["Tumor Volume (mm3)"])
placebo_median = np.median(drug_regimen_placebo["Tumor Volume (mm3)"])
placebo_mode = st.mode(drug_regimen_placebo["Tumor Volume (mm3)"])
placebo_variance = np.var(drug_regimen_placebo["Tumor Volume (mm3)"],ddof=0)
placebo_std = np.std(drug_regimen_placebo["Tumor Volume (mm3)"],ddof=0)
placebo_sem = drug_regimen_placebo["Tumor Volume (mm3)"].sem()
#Propriva
propriva_mean = np.mean(drug_regimen_propriva["Tumor Volume (mm3)"])
propriva_median = np.median(drug_regimen_propriva["Tumor Volume (mm3)"])
propriva_mode = st.mode(drug_regimen_propriva["Tumor Volume (mm3)"])
propriva_variance = np.var(drug_regimen_propriva["Tumor Volume (mm3)"],ddof=0)
propriva_std = np.std(drug_regimen_propriva["Tumor Volume (mm3)"],ddof=0)
propriva_sem = drug_regimen_propriva["Tumor Volume (mm3)"].sem()
#Ramicane
ramicane_mean = np.mean(drug_regimen_ramicane["Tumor Volume (mm3)"])
ramicane_median = np.median(drug_regimen_ramicane["Tumor Volume (mm3)"])
ramicane_mode = st.mode(drug_regimen_ramicane["Tumor Volume (mm3)"])
ramicane_variance = np.var(drug_regimen_ramicane["Tumor Volume (mm3)"],ddof=0)
ramicane_std = np.std(drug_regimen_ramicane["Tumor Volume (mm3)"],ddof=0)
ramicane_sem = drug_regimen_ramicane["Tumor Volume (mm3)"].sem()
#Stelasyn
stelasyn_mean = np.mean(drug_regimen_stelasyn["Tumor Volume (mm3)"])
stelasyn_median = np.median(drug_regimen_stelasyn["Tumor Volume (mm3)"])
stelasyn_mode = st.mode(drug_regimen_stelasyn["Tumor Volume (mm3)"])
stelasyn_variance = np.var(drug_regimen_stelasyn["Tumor Volume (mm3)"],ddof=0)
stelasyn_std = np.std(drug_regimen_stelasyn["Tumor Volume (mm3)"],ddof=0)
stelasyn_sem = drug_regimen_stelasyn["Tumor Volume (mm3)"].sem()
#Zoniferol
zoniferol_mean = np.mean(drug_regimen_zoniferol["Tumor Volume (mm3)"])
zoniferol_median = np.median(drug_regimen_zoniferol["Tumor Volume (mm3)"])
zoniferol_mode = st.mode(drug_regimen_zoniferol["Tumor Volume (mm3)"])
zoniferol_variance = np.var(drug_regimen_zoniferol["Tumor Volume (mm3)"],ddof=0)
zoniferol_std = np.std(drug_regimen_zoniferol["Tumor Volume (mm3)"],ddof=0)
zoniferol_sem = drug_regimen_zoniferol["Tumor Volume (mm3)"].sem()
drug_regimen_df = pd.DataFrame({"Mean":[capomulin_mean,ceftamin_mean,infubinol_mean,ketapril_mean,naftisol_mean,
placebo_mean,propriva_mean,ramicane_mean,stelasyn_mean,zoniferol_mean],
"Median":[capomulin_median,ceftamin_median,infubinol_median,ketapril_median,naftisol_median,
placebo_median,propriva_median,ramicane_median,stelasyn_median,zoniferol_median],
"Variance":[capomulin_variance,ceftamin_variance,infubinol_variance,ketapril_variance,naftisol_variance,
placebo_variance,propriva_variance,ramicane_variance,stelasyn_variance,zoniferol_variance],
"Standard Deviation":[capomulin_std,ceftamin_std,infubinol_std,ketapril_std,naftisol_std,
placebo_std,propriva_std,ramicane_std,stelasyn_std,zoniferol_std],
"SEM":[capomulin_sem,ceftamin_sem,infubinol_sem,ketapril_sem,naftisol_sem,placebo_sem,
propriva_sem,ramicane_sem,stelasyn_sem,zoniferol_sem]})
drug_regimen_df.rename(index={0:"Capomulin",1:"Ceftamin",2:"Infubinol",3:"Ketapril",4:"Naftisol",
5:"Placebo",6:"Propriva",7:"Ramicane",8:"Stelasyn",9:"Zoniferol"}, inplace=True)
drug_regimen_df["Mean"] = drug_regimen_df["Mean"].map("{:.2f}".format)
drug_regimen_df["Median"] = drug_regimen_df["Median"].map("{:.2f}".format)
drug_regimen_df["Variance"] = drug_regimen_df["Variance"].map("{:.2f}".format)
drug_regimen_df["Standard Deviation"] = drug_regimen_df["Standard Deviation"].map("{:.2f}".format)
drug_regimen_df["SEM"] = drug_regimen_df["SEM"].map("{:.2f}".format)
drug_regimen_df = drug_regimen_df.style.set_caption("Summary Statistics Of The Tumor Volume For Each Regimen")
drug_regimen_df
grouped_by_drug_regimen = mice_df.groupby("Drug Regimen")
drug_regimen_tumor_volume = pd.DataFrame({"Mean":grouped_by_drug_regimen["Tumor Volume (mm3)"].mean(),
"Median":grouped_by_drug_regimen["Tumor Volume (mm3)"].median(),
"Variance":grouped_by_drug_regimen["Tumor Volume (mm3)"].var(),
"Standard Deviation":grouped_by_drug_regimen["Tumor Volume (mm3)"].std(),
"SEM":grouped_by_drug_regimen["Tumor Volume (mm3)"].sem()})
drug_regimen_tumor_volume["Mean"] = drug_regimen_tumor_volume["Mean"].map("{:.2f}".format)
drug_regimen_tumor_volume["Median"] = drug_regimen_tumor_volume["Median"].map("{:.2f}".format)
drug_regimen_tumor_volume["Variance"] = drug_regimen_tumor_volume["Variance"].map("{:.2f}".format)
drug_regimen_tumor_volume["Standard Deviation"] = drug_regimen_tumor_volume["Standard Deviation"].map("{:.2f}".format)
drug_regimen_tumor_volume["SEM"] = drug_regimen_tumor_volume["SEM"].map("{:.2f}".format)
drug_regimen_tumor_volume
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
drug_regimen_agg = drug_regimen.groupby(['Drug Regimen'])["Tumor Volume (mm3)"].agg([np.mean,np.median,np.var,np.std,st.sem])
# Using the aggregation method, produce the same summary statistics in a single line
#Formating
drug_regimen_agg = drug_regimen_agg.rename(columns={"mean":"Mean","median":"Median","var":"Variance","std":"Standard Deviation","sem":"SEM"})
drug_regimen_agg["Mean"] = drug_regimen_agg["Mean"].map("{:.2f}".format)
drug_regimen_agg["Median"] = drug_regimen_agg["Median"].map("{:.2f}".format)
drug_regimen_agg["Variance"] = drug_regimen_agg["Variance"].map("{:.2f}".format)
drug_regimen_agg["Standard Deviation"] = drug_regimen_agg["Standard Deviation"].map("{:.2f}".format)
drug_regimen_agg["SEM"] = drug_regimen_agg["SEM"].map("{:.2f}".format)
drug_regimen_agg
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
drug_regimen_panda_plot = pd.DataFrame(mice_df["Drug Regimen"].value_counts())
facecolor = ["red","orange","gold","yellow","green","blue","skyblue","violet","purple","hotpink"]
drug_regimen_panda=drug_regimen_panda_plot.plot.bar(figsize=(11,6),width = 0.5,align="center", colormap = "Paired", legend = None)
drug_regimen_panda.set_title("Total Number of Measurements")
drug_regimen_panda.set_xlabel("Drug Regimen")
drug_regimen_panda.set_ylabel("Number of Tests")
plt.xlim(-0.75, len(drug_regimen_panda_plot)-0.25)
plt.ylim(0, max(mice_df["Drug Regimen"].value_counts())+20)
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
plt.figure(figsize=(11,6))
facecolors = ["red","orange","gold","yellow","green","blue","skyblue","violet","purple","hotpink"]
number_of_tests_df=mice_df.groupby("Drug Regimen")
test_number = number_of_tests_df["Mouse ID"].count()
test_number = test_number.sort_values(ascending=False)
available_drugs = test_number.index.values.tolist()
available_drugs
tick_locations = []
x_axis = np.arange(0,len(available_drugs))
number_of_tests_df = grouped_by_drug_regimen.count()
for x in x_axis:
tick_locations.append(x)
plt.title("Total Number of Measurements")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Tests")
plt.xlim(-0.75, len(available_drugs)-0.25)
plt.ylim(0, max(test_number)+20)
plt.bar(available_drugs,test_number,color = facecolors ,alpha=0.75,align ="center",width=0.5)
plt.xticks(tick_locations,available_drugs)
plt.show()
number_of_tests_df["Mouse ID"]
number_of_tests_df=mice_df.groupby("Drug Regimen")
number_of_tests = number_of_tests_df["Mouse ID"].count()
number_of_tests
###Output
_____no_output_____
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
sex_of_mice = mice_df["Sex"].value_counts()
mice_sex = pd.DataFrame(sex_of_mice)
explode = (0.1,0)
mice_sex_plot = mice_sex.plot(kind = "pie", y ="Sex", title ="Male vs Female (Pandas)",autopct = "%.2f%%",
colors = ['skyblue', 'violet'],shadow=True,startangle = 90, explode = explode )
plt.axis("equal")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
unique_sexs = mice_df["Sex"].unique()
sex_list = []
genders =[]
sex_of_mice = mice_df["Sex"].value_counts()
mice_sex_df = pd.DataFrame(sex_of_mice)
colors = ["springgreen","hotpink"]
explode = (0.1,0)
for x in sex_of_mice:
sex_list.append(x)
# Take the labels from the value_counts index so they stay aligned with the counts
genders = sex_of_mice.index.tolist()
plt.title("Male vs Female (PyPlot)")
plt.ylabel("Sex")
plt.pie(sex_list, explode=explode, labels = genders, colors = colors, autopct="%.2f%%",shadow=True,startangle = 90)
plt.legend(loc='upper right')
plt.axis("equal")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
mice_df.head()
timepoint_list = []
four_treatments_df = mice_df.loc[(mice_df["Drug Regimen"]=="Capomulin")|(mice_df["Drug Regimen"]=="Ramicane")|(mice_df["Drug Regimen"]=="Infubinol")|(mice_df["Drug Regimen"]=="Ceftamin"),:]
mice_treatment = four_treatments_df.groupby("Mouse ID")
for x in mice_treatment["Timepoint"].max():
timepoint_list.append(x)
max_tp=mice_treatment[["Timepoint"]].max()
merge = pd.merge(max_tp,four_treatments_df,on="Mouse ID")
merge_clean = merge.loc[merge["Timepoint_x"]==merge["Timepoint_y"],:]
merge_clean
mice_df.head()
last_timepoint_list = []
last_tp_volume_list = []
four_treatments_df = mice_df.loc[(mice_df["Drug Regimen"]=="Capomulin")|(mice_df["Drug Regimen"]=="Ramicane")|(mice_df["Drug Regimen"]=="Infubinol")|(mice_df["Drug Regimen"]=="Ceftamin"),:]
mice_treatment = four_treatments_df.groupby("Mouse ID")
for x in mice_treatment["Timepoint"].max():
last_timepoint_list.append(x)
max_tp=mice_treatment[["Timepoint"]].max()
all_merge_timepoint = pd.merge(max_tp,four_treatments_df,on="Mouse ID")
clean_merge = all_merge_timepoint.loc[all_merge_timepoint["Timepoint_x"]==all_merge_timepoint["Timepoint_y"],:]
clean_merge = clean_merge[["Mouse ID","Drug Regimen","Sex","Age_months","Weight (g)","Timepoint_y",
"Tumor Volume (mm3)","Metastatic Sites"]]
clean_merge
for y in clean_merge["Tumor Volume (mm3)"]:
last_tp_volume_list.append(f"{y:.2f}")
clean_merge.head(20)
# Treatments in a List
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
clean_merge
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
list_of_treatments = ["Capomulin", "Ramicane", "Infubinol","Ceftamin"]
lower_quartiles = []
mid_quartiles = []
upper_quartiles = []
iqrs = []
lower_bounds = []
upper_bounds =[]
tumor_volumes_per_drug = []
outliners =[]
for drug in list_of_treatments:
specific_drug = clean_merge.loc[clean_merge["Drug Regimen"]==drug,:]
specific_drug_tumor_volume = specific_drug["Tumor Volume (mm3)"]
tumor_volumes_per_drug.append(specific_drug_tumor_volume)
quartiles = specific_drug_tumor_volume.quantile([0.25,0.5,0.75])
lower_quartile=quartiles[0.25]
mid_quartile =quartiles[0.5]
upper_quartile=quartiles[0.75]
iqr = upper_quartile - lower_quartile
lower_bound = lower_quartile - (1.5 * iqr)
upper_bound = upper_quartile + (1.5 * iqr)
lower_quartiles.append(lower_quartile)
mid_quartiles.append(mid_quartile)
upper_quartiles.append(upper_quartile)
iqrs.append(iqr)
lower_bounds.append(lower_bound)
upper_bounds.append(upper_bound)
outlier_found = False
for mice in specific_drug_tumor_volume:
    if mice > upper_bound or mice < lower_bound:
        print(f"Outliers detected in {drug}: {mice}")
        outliners.append(mice)
        outlier_found = True
if not outlier_found:
    print(f"No outliers detected in {drug}")
summary_statistics_df = pd.DataFrame({"Drug Regimen":list_of_treatments,"Lower Quartiles":lower_quartiles,
"Mid Quartiles":mid_quartiles,"Upper Quartiles":upper_quartiles,"IQR":iqrs,
"Lower Bounds":lower_bounds,"Upper Bounds":upper_bounds})
summary_statistics_df.set_index(["Drug Regimen"])
x_axis = list_of_treatments
ticks_locs =[]
fig1, ax1 = plt.subplots(figsize=(12,6))
boxprops = dict(linestyle='-', linewidth=2, color='deepskyblue')
flierprops = dict(marker='o', markerfacecolor='red', markersize=12,
markeredgecolor='none')
medianprops = dict(linestyle='-', linewidth=2, color='gold')
for x in range(1,len(x_axis)+1):
ticks_locs.append(x)
ax1.set_title('Final Tumor Volume vs Drug Regimen')
ax1.set_ylabel("Tumor Volume")
ax1.set_xlabel("Interest Drug Regimen")
ax1.boxplot(tumor_volumes_per_drug, boxprops=boxprops,flierprops=flierprops,medianprops=medianprops)
plt.xticks(ticks_locs, x_axis)
plt.hlines(45,0,len(x_axis)+1, alpha=0.5)
#Interest
plt.show()
list_of_treatments
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_df = mice_df.loc[mice_df["Drug Regimen"]=="Capomulin"]
capomulin_groupby = capomulin_df.groupby("Timepoint")
campomulin_behavior = capomulin_groupby["Tumor Volume (mm3)"].mean()
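# Note: this averages the tumor volume across all Capomulin mice at each timepoint (a regimen-level trend, not a single mouse)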
campomulin_list_index = campomulin_behavior.index.values.tolist()
plt.figure(figsize=(12,6))
plt.plot(campomulin_list_index,campomulin_behavior, color="chartreuse",label="Tumor's Behavior")
plt.title("Capomulin: Tumor Volume vs Time Point")
plt.ylabel("Tumor Volume")
plt.xlabel("Timepoint")
plt.xlim(-5,50)
plt.ylim(36,45.25)
plt.legend(loc="best")
plt.grid()
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_gb_weight = capomulin_df.groupby("Weight (g)")
capomulin_weight_vs_volume = capomulin_gb_weight["Tumor Volume (mm3)"].mean()
capomulin_weight_vs_volume_index = capomulin_weight_vs_volume.index.values.tolist()
capomulin_weight_vs_volume_index= [float(i) for i in capomulin_weight_vs_volume_index]
capomulin_weight_vs_volume_serie = pd.Series(capomulin_weight_vs_volume_index)
x_values = capomulin_weight_vs_volume_serie
y_values = capomulin_weight_vs_volume
(slope,intercept,rvalue,pvalue,stderr) = linregress(x_values,y_values)
regress_values = x_values * slope + intercept
line_eq = "y = "+str(round(slope,2))+"x + "+str(round(intercept,2))
plt.scatter(x_values,y_values, color="red")
plt.plot(x_values,regress_values,color="orange")
plt.title("Capomulin Scatter Plot: Average Tumor Volume vs Mouse Weight")
plt.xlabel("Weight")
plt.ylabel("Tumor Volume")
plt.annotate(line_eq,(20,40),fontsize=15, color="red")
plt.ylim(35.5,46)
plt.xlim(14.5,25.5)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
print(x_values)
print(slope)
print(intercept)
#example_serie  # undefined in this notebook
line_eq
mice_df
capomulin_df = mice_df.loc[mice_df["Drug Regimen"]=="Capomulin"]
capomulin_data = capomulin_df[["Weight (g)","Tumor Volume (mm3)"]]
capomulin_volume = capomulin_data["Tumor Volume (mm3)"]
capomulin_weight = capomulin_data["Weight (g)"]
x_values = capomulin_weight
y_values = capomulin_volume
(slope,intercept,rvalue,pvalue,stderr) = linregress(x_values,y_values)
regress_values = x_values * slope + intercept
line_eq = "y = "+str(round(slope,2))+"x + "+str(round(intercept,2))
plt.scatter(x_values,y_values, color="red")
plt.plot(x_values,regress_values,color="orange")
plt.title("Capomulin Scatter Plot: Average Tumor Volume vs Mouse Weight")
plt.xlabel("Weight")
plt.ylabel("Tumor Volume")
plt.annotate(line_eq,(20,40),fontsize=15, color="red")
plt.ylim(20,60)
plt.xlim(14.5,25.5)
plt.grid()
plt.show()
capomulin_weight_vs_volume
rvalue**2
capomulin_fifteen = capomulin_data.loc[capomulin_data["Weight (g)"]==15]
capomulin_fifteen["Tumor Volume (mm3)"].mean()
###Output
_____no_output_____ |
4. CIFAR10/cifar.ipynb | ###Markdown
Classifying images of everyday objects using a neural networkThe ability to try many different neural network architectures to address a problem is what makes deep learning really powerful, especially compared to shallow learning techniques like linear regression, logistic regression etc. In this assignment, you will:1. Explore the CIFAR10 dataset: https://www.cs.toronto.edu/~kriz/cifar.html2. Set up a training pipeline to train a neural network on a GPU3. Experiment with different network architectures & hyperparametersAs you go through this notebook, you will find a **???** in certain places. Your job is to replace the **???** with appropriate code or values, to ensure that the notebook runs properly end-to-end. Try to experiment with different network structures and hyperparameters to get the lowest loss.You might find these notebooks useful for reference, as you work through this notebook:- https://jovian.ml/aakashns/04-feedforward-nn- https://jovian.ml/aakashns/fashion-feedforward-minimal
###Code
# Uncomment and run the commands below if imports fail
# !conda install numpy pandas pytorch torchvision cpuonly -c pytorch -y
# !pip install matplotlib --upgrade --quiet
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor
from torchvision.utils import make_grid
from torch.utils.data.dataloader import DataLoader
from torch.utils.data import random_split
%matplotlib inline
# Project name used for jovian.commit
project_name = '03-cifar10-feedforward'
###Output
_____no_output_____
###Markdown
Exploring the CIFAR10 dataset
###Code
dataset = CIFAR10(root='data/', download=True, transform=ToTensor())
test_dataset = CIFAR10(root='data/', train=False, transform=ToTensor())
###Output
Files already downloaded and verified
###Markdown
**Q: How many images does the training dataset contain?**
###Code
dataset_size = len(dataset)
dataset_size
###Output
_____no_output_____
###Markdown
**Q: How many images does the test dataset contain?**
###Code
test_dataset_size = len(test_dataset)
test_dataset_size
###Output
_____no_output_____
###Markdown
**Q: How many output classes does the dataset contain? Can you list them?**Hint: Use `dataset.classes`
###Code
classes = dataset.classes
classes
num_classes = len(classes)
num_classes
###Output
_____no_output_____
###Markdown
**Q: What is the shape of an image tensor from the dataset?**
###Code
img, label = dataset[0]
img_shape = img.shape
img_shape
###Output
_____no_output_____
###Markdown
Note that this dataset consists of 3-channel color images (RGB). Let us look at a sample image from the dataset. `matplotlib` expects channels to be the last dimension of the image tensors (whereas in PyTorch they are the first dimension), so we'll use the `.permute` tensor method to shift channels to the last dimension. Let's also print the label for the image.
###Code
img, label = dataset[0]
plt.imshow(img.permute((1, 2, 0)))
print('Label (numeric):', label)
print('Label (textual):', classes[label])
###Output
Label (numeric): 6
Label (textual): frog
###Markdown
**(Optional) Q: Can you determine the number of images belonging to each class?**Hint: Loop through the dataset.
###Code
img_class_count = {}
for tensor, index in dataset:
x = classes[index]
if x not in img_class_count:
img_class_count[x] = 1
else:
img_class_count[x] += 1
print(img_class_count)
###Output
{'frog': 5000, 'truck': 5000, 'deer': 5000, 'automobile': 5000, 'bird': 5000, 'horse': 5000, 'ship': 5000, 'cat': 5000, 'dog': 5000, 'airplane': 5000}
###Markdown
Let's save our work to Jovian, before continuing.
###Code
!pip install jovian --upgrade --quiet
import jovian
jovian.commit(project=project_name, environment=None)
###Output
_____no_output_____
###Markdown
Preparing the data for trainingWe'll use a validation set with 5000 images (10% of the dataset). To ensure we get the same validation set each time, we'll set PyTorch's random number generator to a seed value of 43.
###Code
torch.manual_seed(43)
val_size = 5000
train_size = len(dataset) - val_size
###Output
_____no_output_____
###Markdown
Let's use the `random_split` method to create the training & validation sets
###Code
train_ds, val_ds = random_split(dataset, [train_size, val_size])
len(train_ds), len(val_ds)
###Output
_____no_output_____
###Markdown
We can now create data loaders to load the data in batches.
###Code
batch_size=128
train_loader = DataLoader(train_ds, batch_size, shuffle=True, num_workers=4, pin_memory=True)
val_loader = DataLoader(val_ds, batch_size*2, num_workers=4, pin_memory=True)
test_loader = DataLoader(test_dataset, batch_size*2, num_workers=4, pin_memory=True)
###Output
_____no_output_____
###Markdown
Let's visualize a batch of data using the `make_grid` helper function from Torchvision.
###Code
for images, _ in train_loader:
print('images.shape:', images.shape)
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(images, nrow=16).permute((1, 2, 0)))
break
###Output
images.shape: torch.Size([128, 3, 32, 32])
###Markdown
Can you label all the images by looking at them? Trying to label a random sample of the data manually is a good way to estimate the difficulty of the problem, and identify errors in labeling, if any. Base Model class & Training on GPULet's create a base model class, which contains everything except the model architecture i.e. it will not contain the `__init__` and `forward` methods. We will later extend this class to try out different architectures. In fact, you can extend this model to solve any image classification problem.
###Code
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
class ImageClassificationBase(nn.Module):
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
with torch.no_grad():
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss.detach(), 'val_acc': acc} # detached loss function
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc']))
###Output
_____no_output_____
###Markdown
We can also use the exact same training loop as before. I hope you're starting to see the benefits of refactoring our code into reusable functions.
###Code
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
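# Gradients accumulate by default, so they are cleared here after each optimizer step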
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result)
history.append(result)
return history
###Output
_____no_output_____
###Markdown
Finally, let's also define some utilities for moving our data & labels to the GPU, if one is available.
###Code
torch.cuda.is_available()
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
device = get_default_device()
device
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
###Output
_____no_output_____
###Markdown
Let us also define a couple of helper functions for plotting the losses & accuracies.
###Code
def plot_losses(history):
losses = [x['val_loss'] for x in history]
plt.plot(losses, '-x')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Loss vs. No. of epochs');
def plot_accuracies(history):
accuracies = [x['val_acc'] for x in history]
plt.plot(accuracies, '-x')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('Accuracy vs. No. of epochs');
###Output
_____no_output_____
###Markdown
Let's move our data loaders to the appropriate device.
###Code
train_loader = DeviceDataLoader(train_loader, device)
val_loader = DeviceDataLoader(val_loader, device)
test_loader = DeviceDataLoader(test_loader, device)
###Output
_____no_output_____
###Markdown
Training the modelWe will make several attempts at training the model. Each time, try a different architecture and a different set of learning rates. Here are some ideas to try:- Increase or decrease the number of hidden layers- Increase or decrease the size of each hidden layer- Try different activation functions- Try training for a different number of epochs- Try different learning rates in every epochWhat's the highest validation accuracy you can get to? **Can you get to 50% accuracy? What about 60%?**
###Code
input_size = 3*32*32
output_size = 10
###Output
_____no_output_____
###Markdown
**Q: Extend the `ImageClassificationBase` class to complete the model definition.**Hint: Define the `__init__` and `forward` methods.
###Code
class CIFAR10Model(ImageClassificationBase):
def __init__(self):
super().__init__()
self.linear1 = nn.Linear(input_size, 512)
self.linear2 = nn.Linear(512, 256)
self.linear3 = nn.Linear(256, 128)
self.linear4 = nn.Linear(128, output_size)
def forward(self, xb):
# Flatten images into vectors
out = xb.view(xb.size(0), -1)
# Apply layers & activation functions
# linear layer 1
out = self.linear1(out)
# activation layer 1
out = F.relu(out)
# linear layer 2
out = self.linear2(out)
# activation layer 2
out = F.relu(out)
# linear layer 3
out = self.linear3(out)
# activation layer 3
out = F.relu(out)
# linear layer 4
out = self.linear4(out)
return out
###Output
_____no_output_____
###Markdown
You can now instantiate the model, and move it the appropriate device.
###Code
model = to_device(CIFAR10Model(), device)
###Output
_____no_output_____
###Markdown
Before you train the model, it's a good idea to check the validation loss & accuracy with the initial set of weights.
###Code
history = [evaluate(model, val_loader)]
history
###Output
_____no_output_____
###Markdown
**Q: Train the model using the `fit` function to reduce the validation loss & improve accuracy.**Leverage the interactive nature of Jupyter to train the model in multiple phases, adjusting the no. of epochs & learning rate each time based on the result of the previous training phase.
###Code
history += fit(10, 1e-1, model, train_loader, val_loader)
history += fit(10, 1e-2, model, train_loader, val_loader)
history += fit(10, 1e-3, model, train_loader, val_loader)
history += fit(20, 1e-5, model, train_loader, val_loader)
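# A possible further phase using Adam instead of SGD (sketch, not run here):
# history += fit(5, 1e-3, model, train_loader, val_loader, opt_func=torch.optim.Adam)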
###Output
Epoch [0], val_loss: 1.3716, val_acc: 0.5106
Epoch [1], val_loss: 1.3716, val_acc: 0.5111
Epoch [2], val_loss: 1.3716, val_acc: 0.5111
Epoch [3], val_loss: 1.3716, val_acc: 0.5111
Epoch [4], val_loss: 1.3717, val_acc: 0.5113
Epoch [5], val_loss: 1.3717, val_acc: 0.5113
Epoch [6], val_loss: 1.3717, val_acc: 0.5115
Epoch [7], val_loss: 1.3717, val_acc: 0.5115
Epoch [8], val_loss: 1.3717, val_acc: 0.5119
Epoch [9], val_loss: 1.3717, val_acc: 0.5121
Epoch [10], val_loss: 1.3717, val_acc: 0.5125
Epoch [11], val_loss: 1.3717, val_acc: 0.5127
Epoch [12], val_loss: 1.3717, val_acc: 0.5127
Epoch [13], val_loss: 1.3718, val_acc: 0.5131
Epoch [14], val_loss: 1.3718, val_acc: 0.5131
Epoch [15], val_loss: 1.3718, val_acc: 0.5125
Epoch [16], val_loss: 1.3718, val_acc: 0.5127
Epoch [17], val_loss: 1.3718, val_acc: 0.5125
Epoch [18], val_loss: 1.3718, val_acc: 0.5123
Epoch [19], val_loss: 1.3718, val_acc: 0.5123
###Markdown
Plot the losses and the accuracies to check if you're starting to hit the limits of how well your model can perform on this dataset. You can train some more if you can see the scope for further improvement.
###Code
plot_losses(history)
plot_accuracies(history)
###Output
_____no_output_____
###Markdown
Finally, evaluate the model on the test dataset report its final performance.
###Code
evaluate(model, test_loader)
###Output
_____no_output_____
###Markdown
Are you happy with the accuracy? Record your results by completing the section below, then you can come back and try a different architecture & hyperparameters. Recording your resultsAs you perform multiple experiments, it's important to record the results in a systematic fashion, so that you can review them later and identify the best approaches that you might want to reproduce or build upon later. **Q: Describe the model's architecture with a short summary.**E.g. `"3 layers (16,32,10)"` (16, 32 and 10 represent output sizes of each layer)
###Code
arch = repr(model)
arch
###Output
_____no_output_____
###Markdown
**Q: Provide the list of learning rates used while training.**
###Code
lrs = [1e-1, 1e-2, 1e-3, 1e-5]
###Output
_____no_output_____
###Markdown
**Q: Provide the list of no. of epochs used while training.**
###Code
epochs = [10, 20]
###Output
_____no_output_____
###Markdown
**Q: What were the final test accuracy & test loss?**
###Code
res = evaluate(model, test_loader)  # use the held-out test set for the final numbers
test_acc = res['val_acc']
test_loss = res['val_loss']
print(res)
###Output
{'val_loss': 1.3717844486236572, 'val_acc': 0.5122932195663452}
###Markdown
Finally, let's save the trained model weights to disk, so we can use this model later.
###Code
torch.save(model.state_dict(), 'cifar10-feedforward.pth')
###Output
_____no_output_____
###Markdown
The `jovian` library provides some utility functions to keep your work organized. With every version of your notebok, you can attach some hyperparameters and metrics from your experiment.
###Code
# Clear previously recorded hyperparams & metrics
jovian.reset()
jovian.log_hyperparams(arch=arch,
lrs=lrs,
epochs=epochs)
jovian.log_metrics(test_loss=test_loss, test_acc=test_acc)
###Output
[jovian] Metrics logged.
###Markdown
Finally, we can commit the notebook to Jovian, attaching the hypeparameters, metrics and the trained model weights.
###Code
jovian.commit(project=project_name, outputs=['cifar10-feedforward.pth'], environment=None)
###Output
_____no_output_____ |
udemy_ml_bootcamp/Python-for-Data-Visualization/Seaborn/Seaborn Exercises .ipynb | ###Markdown
___ ___ Seaborn ExercisesTime to practice your new seaborn skills! Try to recreate the plots below (don't worry about color schemes, just the plot itself). The DataWe will be working with a famous titanic data set for these exercises. Later on in the Machine Learning section of the course, we will revisit this data, and use it to predict survival rates of passengers. For now, we'll just focus on the visualization of the data with seaborn:
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('whitegrid')
titanic = sns.load_dataset('titanic')
titanic.head()
titanic.shape
###Output
_____no_output_____
###Markdown
Exercises** Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Pay careful attention to the x and y labels for hints.**** *Note! To avoid losing the plot image, make sure you don't code in the cell directly above the plot; there is an extra cell above that one which won't overwrite that plot!* **
###Code
sns.jointplot(x='fare', y='age', data=titanic)
sns.distplot(titanic['fare'], kde=False, hist=True)
sns.boxplot(x='class', y='age', data=titanic)
sns.swarmplot(x='class', y='age', data=titanic)
sns.countplot(x='sex', data=titanic)
sns.heatmap(titanic.corr())
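# On newer pandas versions, corr() may need numeric_only=True: sns.heatmap(titanic.corr(numeric_only=True))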
import matplotlib.pyplot as plt
fg = sns.FacetGrid(data=titanic ,col='sex')
fg.map(plt.hist, 'age')
###Output
_____no_output_____ |
003 - Machine Learing/LinearRegression_Diabetes.ipynb | ###Markdown
Comparing Actual and Predicted Values
###Code
df = pd.DataFrame({'Real': y_valid, 'Predito': predictions}).head(50)
df.plot(kind='bar',figsize=(20,8))
plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.show()
###Output
_____no_output_____
###Markdown
Dimensionality Reduction for Visualization
###Code
from sklearn.decomposition import PCA
pca_diabetes = PCA(n_components=2)
principalComponents_diabetes = pca_diabetes.fit_transform(X_valid)
principal_diabetes_Df = pd.DataFrame(data = principalComponents_diabetes
, columns = ['principal component 1', 'principal component 2'])
principal_diabetes_Df['y'] = y_valid
principal_diabetes_Df['predicts'] = predictions
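# Optionally check how much variance the two components retain
print(pca_diabetes.explained_variance_ratio_)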
import seaborn as sns
plt.figure(figsize=(16,10))
sns.scatterplot(
x="principal component 1", y="principal component 2",
hue="y",
data=principal_diabetes_Df,
alpha=0.7,
palette="mako"
)
# Plot outputs
plt.figure(figsize=(20,15))
plt.scatter(x="principal component 1", y="y", color="black", data=principal_diabetes_Df)
plt.scatter(x="principal component 1", y="predicts", color="green", data=principal_diabetes_Df)
#plt.xticks(())
#plt.yticks(())
plt.title("Quantitative Measure of Disease Progression")
plt.xlabel('PCA1')
plt.ylabel('Y / Yhat')
plt.legend()
plt.grid()
plt.show()
# Plot outputs
plt.figure(figsize=(20,15))
plt.scatter(x="principal component 2", y="y", color='black', linewidths=3, data=principal_diabetes_Df)
plt.scatter(x="principal component 2", y="predicts", color='blue', data=principal_diabetes_Df)
#plt.xticks(())
#plt.yticks(())
plt.title("Quantitative Measure of Disease Progression")
plt.xlabel('PCA2')
plt.ylabel('Y / Yhat')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____ |
Manufacturing/automation/artifacts/amlnotebooks/4a Safety Incident Reports Form Recognizer.ipynb | ###Markdown
Azure Form Recognizer Azure Form Recognizer is a cognitive service that uses machine learning technology to identify and extract key-value pairs and table data from form documents. It then outputs structured data that includes the relationships in the original file. ![](https://dreamdemostorageforgen2.blob.core.windows.net/mfgdemodata/Incident_Reports.jpg) Overview *Safety Incident Reports Dataset*: Raw unstructured data is fed into the pipeline in the form of electronically generated PDFs. These reports contain information about injuries that occurred at 5 different factories belonging to a company. This data provides information on injury reports, including the nature, description, date, source and the name of the establishment where it happened. Notebook Organization + Fetch the injury report PDF files from a container under an Azure storage account. + Convert the PDF files to JSON by querying the Azure-trained form recognizer model using the REST API. + Preprocess the JSON files to extract only relevant information. + Push the JSON files to a container under an Azure storage account. Importing Relevant Libraries
###Code
# Please install this specific version of azure storage blob compatible with this notebook.
!pip install azure-storage-blob==2.1.0
# Import the required libraries
import json
import time
import requests
import os
from azure.storage.blob import BlockBlobService
import pprint
from os import listdir
from os.path import isfile, join
import shutil
import pickle
###Output
Requirement already satisfied: azure-storage-blob==2.1.0 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (2.1.0)
Requirement already satisfied: azure-common>=1.1.5 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from azure-storage-blob==2.1.0) (1.1.25)
Requirement already satisfied: azure-storage-common~=2.1 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from azure-storage-blob==2.1.0) (2.1.0)
Requirement already satisfied: python-dateutil in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2.8.0)
Requirement already satisfied: requests in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2.23.0)
Requirement already satisfied: cryptography in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2.7)
Requirement already satisfied: six>=1.5 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from python-dateutil->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (1.14.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from requests->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (1.24.2)
Requirement already satisfied: idna<3,>=2.5 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from requests->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from requests->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from requests->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2019.11.28)
Requirement already satisfied: cffi!=1.11.3,>=1.8 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from cryptography->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (1.12.3)
Requirement already satisfied: asn1crypto>=0.21.0 in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from cryptography->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (1.0.1)
Requirement already satisfied: pycparser in /anaconda/envs/azureml_py36/lib/python3.6/site-packages (from cffi!=1.11.3,>=1.8->cryptography->azure-storage-common~=2.1->azure-storage-blob==2.1.0) (2.19)
###Markdown
Create Local Folders
###Code
# Create local directories if they don't exist
# *input_forms* contains all the pdf files to be converted to json
if (not os.path.isdir(os.getcwd()+"/input_forms")):
os.makedirs(os.getcwd()+"/input_forms")
# *output_json* will contain all the converted json files
if (not os.path.isdir(os.getcwd()+"/output_json")):
os.makedirs(os.getcwd()+"/output_json")
###Output
_____no_output_____
###Markdown
Downloading the PDF forms from a container in azure storage - Downloads all PDF forms from a container named *incidentreport* to a local folder *input_forms*
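This cell, and several later ones, import a user-defined `config` module that is not included in the notebook. A minimal sketch of the attributes it is assumed to expose (all values are placeholders, not real credentials):

```python
# config.py -- placeholder values only; replace with your own resource details.
STORAGE_ACCOUNT_NAME = "<storage-account-name>"
STORAGE_ACCOUNT_ACCESS_KEY = "<storage-account-access-key>"
FORM_RECOGNIZER_ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
FORM_RECOGNIZER_APIM_KEY = "<form-recognizer-subscription-key>"
FORM_RECOGNIZER_MODEL_ID = "<custom-model-id>"
```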
###Code
%%time
# Downloading pdf files from a container named *incidentreport* to a local folder *input_forms*
# Importing user defined config
import config
# setting up blob storage configs
STORAGE_ACCOUNT_NAME = config.STORAGE_ACCOUNT_NAME
STORAGE_ACCOUNT_ACCESS_KEY = config.STORAGE_ACCOUNT_ACCESS_KEY
STORAGE_CONTAINER_NAME = "incidentreport"
# Instantiating a blob service object
blob_service = BlockBlobService(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_ACCESS_KEY)
blobs = blob_service.list_blobs(STORAGE_CONTAINER_NAME)
# Downloading pdf files from the container *incidentreport* and storing them locally to *input_forms* folder
for blob in blobs:
# Check if the blob.name is already present in the folder input_forms. If yes then continue
try:
with open('merged_log','rb') as f:
merged_files = pickle.load(f)
except FileNotFoundError:
merged_files = set()
# If file is already processed then continue to next file
if (blob.name in merged_files):
continue
download_file_path = os.path.join(os.getcwd(), "input_forms", blob.name)
blob_service.get_blob_to_path(STORAGE_CONTAINER_NAME, blob.name ,download_file_path)
merged_files.add(blob.name)
# Log all the processed files at the end of the script (to keep track later)
with open('merged_log', 'wb') as f:
pickle.dump(merged_files, f)
# Total number of forms to be converted to JSON
files = [f for f in listdir(os.getcwd()+"/input_forms") if isfile(join(os.getcwd()+"/input_forms", f))]
###Output
_____no_output_____
###Markdown
Querying the custom trained form recognizer model (PDF -> JSON) - Converts PDF -> JSON by querying the trained custom model. - Preprocesses the JSON files and extracts only the relevant information.
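The long cell below interleaves the REST calls (a POST followed by polling GETs) with the JSON clean-up. Taken on its own, the clean-up step amounts to the sketch below, which mirrors the loop in the cell and assumes the v2.0 response layout:

```python
# Flatten the recognized fields into a simple key/value dict (sketch of the logic used below).
def flatten_fields(resp_json, form_url):
    fields = resp_json['analyzeResult']['documentResults'][0]['fields']
    cleaned = {}
    for name, field in fields.items():
        key = name.replace(" ", "_")
        cleaned[key] = field['valueString'] if field is not None else None
    cleaned['form_url'] = form_url
    return cleaned
```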
###Code
%%time
# Importing user defined config
import config
# Endpoint parameters for querying the custom trained form-recognizer model to return the processed JSON
# Processes PDF files one by one and return CLEAN JSON files
endpoint = config.FORM_RECOGNIZER_ENDPOINT
# Change if api key is expired
apim_key = config.FORM_RECOGNIZER_APIM_KEY
# This model is the one trained on 5 forms
model_id =config.FORM_RECOGNIZER_MODEL_ID
post_url = endpoint + "/formrecognizer/v2.0/custom/models/%s/analyze" % model_id
files = [f for f in listdir(os.getcwd()+"/input_forms") if isfile(join(os.getcwd()+"/input_forms", f))]
params = {"includeTextDetails": True}
headers = {'Content-Type': 'application/pdf', 'Ocp-Apim-Subscription-Key': apim_key}
local_path = os.path.join(os.getcwd(), "input_forms//")
output_path = os.path.join(os.getcwd(), "output_json//")
for file in files:
try:
with open('json_log','rb') as l:
json_files = pickle.load(l)
except FileNotFoundError:
json_files = set()
if (file in json_files):
continue
else:
with open(local_path+file, "rb") as f:
data_bytes = f.read()
try:
resp = requests.post(url = post_url, data = data_bytes, headers = headers, params = params)
print('resp',resp)
if resp.status_code != 202:
print("POST analyze failed:\n%s" % json.dumps(resp.json()))
quit()
print("POST analyze succeeded:\n%s" % resp.headers)
get_url = resp.headers["operation-location"]
except Exception as e:
print("POST analyze failed:\n%s" % str(e))
quit()
n_tries = 15
n_try = 0
wait_sec = 5
max_wait_sec = 60
while n_try < n_tries:
try:
resp = requests.get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
resp_json = resp.json()
if resp.status_code != 200:
print("GET analyze results failed:\n%s" % json.dumps(resp_json))
quit()
status = resp_json["status"]
if status == "succeeded":
print("Analysis succeeded:\n%s" % file[:-4])
allkeys = resp_json['analyzeResult']['documentResults'][0]['fields'].keys()
new_dict = {}
for i in allkeys:
if resp_json['analyzeResult']['documentResults'][0]['fields'][i] != None:
key = i.replace(" ", "_")
new_dict[key] = resp_json['analyzeResult']['documentResults'][0]['fields'][i]['valueString']
else:
key = i.replace(" ", "_")
new_dict[key] = None
# Appending form url to json
new_dict['form_url'] = 'https://stcognitivesearch0001.blob.core.windows.net/formupload/' + file
with open(output_path+file[:-4]+".json", 'w') as outfile:
json.dump(new_dict, outfile)
# Re-save the file with UTF-8 encoding for Spanish forms; otherwise accented characters are written incorrectly.
with open(output_path+file[:-4]+".json", 'w', encoding='utf-8') as outfile:
json.dump(new_dict, outfile, ensure_ascii=False)
# Once JSON is saved log it otherwise don't log it.
json_files.add(file)
with open('json_log', 'wb') as f:
pickle.dump(json_files, f)
break
if status == "failed":
print("Analysis failed:\n%s" % json.dumps(resp_json))
quit()
# Analysis still running. Wait and retry.
time.sleep(wait_sec)
n_try += 1
wait_sec = min(2*wait_sec, max_wait_sec)
except Exception as e:
msg = "GET analyze results failed:\n%s" % str(e)
print(msg)
quit()
###Output
_____no_output_____
###Markdown
Upload the JSON files to a container - Upload JSON files from local folder *output_json* to the container *formrecogoutput*
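After the upload cells below have run, the container contents can be double-checked with the same `list_blobs` call used earlier for downloads; a short sketch reusing the `blob_service` object created below:

```python
# Verify the upload by listing the blobs now present in the output container (sketch).
for blob in blob_service.list_blobs("formrecogoutput"):
    print(blob.name)
```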
###Code
# Total number of converted JSON
files = [f for f in listdir(os.getcwd()+"/output_json") if isfile(join(os.getcwd()+"/output_json", f))]
%%time
# Importing user defined config
import config
# Connect to the container for uploading the JSON files
# Set up configs for blob storage
STORAGE_ACCOUNT_NAME = config.STORAGE_ACCOUNT_NAME
STORAGE_ACCOUNT_ACCESS_KEY = config.STORAGE_ACCOUNT_ACCESS_KEY
# Upload the JSON files in this container
STORAGE_CONTAINER_NAME = "formrecogoutput"
# Instantiating a blob service object
blob_service = BlockBlobService(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_ACCESS_KEY)
%%time
# Upload JSON files from local folder *output_json* to the container *formrecogoutput*
local_path = os.path.join(os.getcwd(), "output_json")
# print(local_path)
for files in os.listdir(local_path):
# print(os.path.join(local_path,files))
blob_service.create_blob_from_path(STORAGE_CONTAINER_NAME, files, os.path.join(local_path,files))
###Output
_____no_output_____
###Markdown
*DISCLAIMER* By accessing this code, you acknowledge the code is made available for presentation and demonstration purposes only and that the code: (1) is not subject to SOC 1 and SOC 2 compliance audits; (2) is not designed or intended to be a substitute for the professional advice, diagnosis, treatment, or judgment of a certified financial services professional; (3) is not designed, intended or made available as a medical device; and (4) is not designed or intended to be a substitute for professional medical advice, diagnosis, treatment or judgement. Do not use this code to replace, substitute, or provide professional financial advice or judgment, or to replace, substitute or provide medical advice, diagnosis, treatment or judgement. You are solely responsible for ensuring the regulatory, legal, and/or contractual compliance of any use of the code, including obtaining any authorizations or consents, and any solution you choose to build that incorporates this code in whole or in part.
###Markdown
Azure Form Recognizer Azure Form Recognizer is a cognitive service that uses machine learning technology to identify and extract key-value pairs and table data from form documents. It then outputs structured data that includes the relationships in the original file. ![](https://dreamdemostorageforgen2.blob.core.windows.net/mfgdemodata/Incident_Reports.jpg) Overview *Safety Incident Reports Dataset*: Raw unstructured data is fed into the pipeline in the form of electronically generated PDFs. These reports contain information about injuries that occurred at 5 different factories belonging to a company. This data provides information on injury reports, including the nature, description, date, source and the name of the establishment where it happened. Notebook Organization + Fetch the injury report PDF files from a container under an Azure storage account. + Convert the PDF files to JSON by querying the Azure-trained form recognizer model using the REST API. + Preprocess the JSON files to extract only relevant information. + Push the JSON files to a container under an Azure storage account. Importing Relevant Libraries
###Code
# Please install this specific version of azure storage blob compatible with this notebook.
!pip install --quiet azure-storage-blob==2.1.0
# Import the required libraries
import json
import time
import requests
import os
from azure.storage.blob import BlockBlobService
import pprint
from os import listdir
from os.path import isfile, join
import shutil
import pickle
###Output
_____no_output_____
###Markdown
Create Local Folders
###Code
# Create local directories if they don't exist
# *input_forms* contains all the pdf files to be converted to json
if (not os.path.isdir(os.getcwd()+"/input_forms")):
os.makedirs(os.getcwd()+"/input_forms")
# *output_json* will contain all the converted json files
if (not os.path.isdir(os.getcwd()+"/output_json")):
os.makedirs(os.getcwd()+"/output_json")
###Output
_____no_output_____
###Markdown
Downloading the PDF forms from a container in azure storage - Downloads all PDF forms from a container named *incidentreport* to a local folder *input_forms*
###Code
%%time
# Downloading pdf files from a container named *incidentreport* to a local folder *input_forms*
# Importing user defined config
import config
# setting up blob storage configs
STORAGE_ACCOUNT_NAME = config.STORAGE_ACCOUNT_NAME
STORAGE_ACCOUNT_ACCESS_KEY = config.STORAGE_ACCOUNT_ACCESS_KEY
STORAGE_CONTAINER_NAME = "incidentreport"
# Instantiating a blob service object
blob_service = BlockBlobService(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_ACCESS_KEY)
blobs = blob_service.list_blobs(STORAGE_CONTAINER_NAME)
# Downloading pdf files from the container *incidentreport* and storing them locally to *input_forms* folder
for blob in blobs:
if not blob.name.rsplit('.',1)[-1] == 'pdf':
continue
# Check if the blob.name is already present in the folder input_forms. If yes then continue
try:
with open('merged_log','rb') as f:
merged_files = pickle.load(f)
except FileNotFoundError:
merged_files = set()
# If file is already processed then continue to next file
if (blob.name in merged_files):
print(blob.name)
continue
download_file_path = os.path.join(os.getcwd(), "input_forms", blob.name)
blob_service.get_blob_to_path(STORAGE_CONTAINER_NAME, blob.name ,download_file_path)
merged_files.add(blob.name)
# Log all the processed files at the end of the script (to keep track later)
with open('merged_log', 'wb') as f:
pickle.dump(merged_files, f)
# Total number of forms to be converted to JSON
files = [f for f in listdir(os.getcwd()+"/input_forms") if isfile(join(os.getcwd()+"/input_forms", f))]
files
###Output
_____no_output_____
###Markdown
Querying the custom trained form recognizer model (PDF -> JSON) - Converts PDF -> JSON by querying the trained custom model. - Preprocesses the JSON files and extracts only the relevant information.
###Code
%%time
# Importing user defined config
import config
# Endpoint parameters for querying the custom trained form-recognizer model to return the processed JSON
# Processes PDF files one by one and return CLEAN JSON files
endpoint = config.FORM_RECOGNIZER_ENDPOINT
print(endpoint)
# Change if api key is expired
apim_key = config.FORM_RECOGNIZER_APIM_KEY
print(apim_key)
# This model is the one trained on 5 forms
model_id =config.FORM_RECOGNIZER_MODEL_ID
print(model_id)
post_url = endpoint + "/formrecognizer/v2.0/custom/models/%s/analyze" % model_id
print(post_url)
files = [f for f in listdir(os.getcwd()+"/input_forms") if isfile(join(os.getcwd()+"/input_forms", f))]
params = {"includeTextDetails": True}
headers = {'Content-Type': 'application/pdf', 'Ocp-Apim-Subscription-Key': apim_key}
local_path = os.path.join(os.getcwd(), "input_forms//")
output_path = os.path.join(os.getcwd(), "output_json//")
for file in files:
try:
with open('json_log','rb') as l:
json_files = pickle.load(l)
except FileNotFoundError:
json_files = set()
if (file in json_files):
continue
else:
with open(local_path+file, "rb") as f:
data_bytes = f.read()
try:
resp = requests.post(url = post_url, data = data_bytes, headers = headers, params = params)
print('resp',resp)
if resp.status_code != 202:
print("POST analyze failed:\n%s" % json.dumps(resp.json()))
# quit()
break
else:
print("POST analyze succeeded:\n%s" % resp.headers)
get_url = resp.headers["operation-location"]
except Exception as e:
print("POST analyze failed:\n%s" % str(e))
# quit()
break
n_tries = 15
n_try = 0
wait_sec = 5
max_wait_sec = 60
while n_try < n_tries:
try:
resp = requests.get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
resp_json = resp.json()
if resp.status_code != 200:
print("GET analyze results failed:\n%s" % json.dumps(resp_json))
quit()
status = resp_json["status"]
if status == "succeeded":
print("Analysis succeeded:\n%s" % file[:-4])
allkeys = resp_json['analyzeResult']['documentResults'][0]['fields'].keys()
new_dict = {}
for i in allkeys:
if resp_json['analyzeResult']['documentResults'][0]['fields'][i] != None:
key = i.replace(" ", "_")
new_dict[key] = resp_json['analyzeResult']['documentResults'][0]['fields'][i]['valueString']
else:
key = i.replace(" ", "_")
new_dict[key] = None
# Appending form url to json
new_dict['form_url'] = 'https://stcognitivesearch001.blob.core.windows.net/formupload/' + file
with open(output_path+file[:-4]+".json", 'w') as outfile:
json.dump(new_dict, outfile)
# Re-save the file with UTF-8 encoding for Spanish forms; otherwise accented characters are written incorrectly.
with open(output_path+file[:-4]+".json", 'w', encoding='utf-8') as outfile:
json.dump(new_dict, outfile, ensure_ascii=False)
# Once JSON is saved log it otherwise don't log it.
json_files.add(file)
with open('json_log', 'wb') as f:
pickle.dump(json_files, f)
break
if status == "failed":
print("Analysis failed:\n%s" % json.dumps(resp_json))
# quit()
# Analysis still running. Wait and retry.
time.sleep(wait_sec)
n_try += 1
wait_sec = min(2*wait_sec, max_wait_sec)
except Exception as e:
msg = "GET analyze results failed:\n%s" % str(e)
print(msg)
# quit()
break
###Output
https://westus2.api.cognitive.microsoft.com
8535af565d934195b54dee076dd0aa3d
d35340be-1522-46cb-b72a-ead573d0475b
https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyze
resp <Response [202]>
POST analyze succeeded:
{'Content-Length': '0', 'Operation-Location': 'https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyzeresults/0256fc12-b004-4a84-ad2e-08841fdcbd4f', 'x-envoy-upstream-service-time': '69', 'apim-request-id': 'ed60a913-8f6f-4e87-b221-8cdbffe7d24b', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'Date': 'Wed, 23 Sep 2020 07:34:11 GMT'}
Analysis succeeded:
202045000
resp <Response [202]>
POST analyze succeeded:
{'Content-Length': '0', 'Operation-Location': 'https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyzeresults/41b8a0dc-b801-45e4-9fd8-1cf9e17885ad', 'x-envoy-upstream-service-time': '62', 'apim-request-id': 'eb5fa97e-e95a-49e5-bbfb-62a1a0d703e8', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'Date': 'Wed, 23 Sep 2020 07:34:26 GMT'}
Analysis succeeded:
202045001
resp <Response [202]>
POST analyze succeeded:
{'Content-Length': '0', 'Operation-Location': 'https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyzeresults/db61bf1b-7f20-49e1-a75b-470237f126d6', 'x-envoy-upstream-service-time': '87', 'apim-request-id': 'fe7ed8a2-e5ab-4488-8350-7788e601ea51', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'Date': 'Wed, 23 Sep 2020 07:34:42 GMT'}
Analysis succeeded:
202045005
resp <Response [202]>
POST analyze succeeded:
{'Content-Length': '0', 'Operation-Location': 'https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyzeresults/317a7d02-0fc5-4ba3-a073-548b3428b9d5', 'x-envoy-upstream-service-time': '57', 'apim-request-id': '13fda5e9-71ea-47f6-bc28-a9005e2fcc3b', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'Date': 'Wed, 23 Sep 2020 07:34:57 GMT'}
Analysis succeeded:
202045016
resp <Response [202]>
POST analyze succeeded:
{'Content-Length': '0', 'Operation-Location': 'https://westus2.api.cognitive.microsoft.com/formrecognizer/v2.0/custom/models/d35340be-1522-46cb-b72a-ead573d0475b/analyzeresults/0e64d489-b2fe-42d2-a41a-5b0a0a429f6e', 'x-envoy-upstream-service-time': '83', 'apim-request-id': '3172c2d8-065b-4e0a-b8be-d3a9dcd59b31', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'Date': 'Wed, 23 Sep 2020 07:35:12 GMT'}
Analysis succeeded:
202045020
CPU times: user 252 ms, sys: 35.6 ms, total: 287 ms
Wall time: 1min 17s
###Markdown
Upload the JSON files to a container - Upload JSON files from local folder *output_json* to the container *formrecogoutput*
###Code
# Total number of converted JSON
files = [f for f in listdir(os.getcwd()+"/output_json") if isfile(join(os.getcwd()+"/output_json", f))]
len(files)
%%time
# Importing user defined config
import config
# Connect to the container for uploading the JSON files
# Set up configs for blob storage
STORAGE_ACCOUNT_NAME = config.STORAGE_ACCOUNT_NAME
STORAGE_ACCOUNT_ACCESS_KEY = config.STORAGE_ACCOUNT_ACCESS_KEY
# Upload the JSON files in this container
STORAGE_CONTAINER_NAME = "formrecogoutput"
# Instantiating a blob service object
blob_service = BlockBlobService(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_ACCESS_KEY)
%%time
# Upload JSON files from local folder *output_json* to the container *formrecogoutput*
local_path = os.path.join(os.getcwd(), "output_json")
# print(local_path)
for files in os.listdir(local_path):
# print(os.path.join(local_path,files))
blob_service.create_blob_from_path(STORAGE_CONTAINER_NAME, files, os.path.join(local_path,files))
###Output
CPU times: user 19.6 ms, sys: 1.32 ms, total: 21 ms
Wall time: 416 ms
|
adding_layers/ecg_train_2conv_10run.ipynb | ###Markdown
1D-CNN Model for ECG Classification- The model used has 2 Conv. layers and 2 FC layers.- This code repeatedly runs the training process and saves everything produced along the way, such as the per-epoch loss and accuracy needed for drawing learning curves, and the maximum test accuracy of each run. Get permission of Google Drive access
###Code
from google.colab import drive
drive.mount('/content/gdrive')
root_path = 'gdrive/My Drive/Colab Notebooks'
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
File name settings
###Code
data_dir = 'mitdb'
train_name = 'train_ecg.hdf5'
test_name = 'test_ecg.hdf5'
all_name = 'all_ecg.hdf5'
model_dir = 'model'
model_name = 'conv2'
model_ext = '.pth'
csv_dir = 'csv'
csv_ext = '.csv'
csv_name = 'conv2'
csv_accs_name = 'accs_conv2'
###Output
_____no_output_____
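With `root_path` and the CSV names above, the per-epoch history that each run writes to `csv/conv2_<run>.csv` (columns `loss`, `test_loss`, `acc`, `test_acc`) can later be turned into the loss and accuracy graphs mentioned in the introduction. A minimal sketch for the first run (to be run after training has finished):

```python
# Plot the loss/accuracy history saved by the first run (sketch).
import os
import pandas as pd
import matplotlib.pyplot as plt

history = pd.read_csv(os.path.join(root_path, csv_dir, 'conv2_1.csv'))
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(12, 4))
ax_loss.plot(history['loss'], label='train')
ax_loss.plot(history['test_loss'], label='test')
ax_loss.set_xlabel('epoch')
ax_loss.set_ylabel('loss')
ax_loss.legend()
ax_acc.plot(history['acc'], label='train')
ax_acc.plot(history['test_acc'], label='test')
ax_acc.set_xlabel('epoch')
ax_acc.set_ylabel('accuracy')
ax_acc.legend()
plt.show()
```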
###Markdown
Import required packages
###Code
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torch.optim import Adam
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
GPU settings
###Code
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
###Output
Tesla K80
###Markdown
Define `ECG` `Dataset` class
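The class below expects HDF5 files with datasets `x_train`/`y_train` (train), `x_test`/`y_test` (test), or `x`/`y` (all), where each sample is a 1 x 128 beat segment and the labels are integers for the five classes matching the final `nn.Linear(128, 5)` layer defined later. A quick way to confirm a file has that layout (a sketch, assuming the files are already in place):

```python
# Inspect the keys, shapes, and dtypes of the training file (sketch).
import os
import h5py

with h5py.File(os.path.join(root_path, data_dir, train_name), 'r') as hdf:
    for key in hdf.keys():            # expected: 'x_train', 'y_train'
        print(key, hdf[key].shape, hdf[key].dtype)
```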
###Code
class ECG(Dataset):
def __init__(self, mode='train'):
if mode == 'train':
with h5py.File(os.path.join(root_path, data_dir, train_name), 'r') as hdf:
self.x = hdf['x_train'][:]
self.y = hdf['y_train'][:]
elif mode == 'test':
with h5py.File(os.path.join(root_path, data_dir, test_name), 'r') as hdf:
self.x = hdf['x_test'][:]
self.y = hdf['y_test'][:]
elif mode == 'all':
with h5py.File(os.path.join(root_path, data_dir, all_name), 'r') as hdf:
self.x = hdf['x'][:]
self.y = hdf['y'][:]
else:
raise ValueError('Argument of mode should be train, test, or all.')
def __len__(self):
return len(self.x)
def __getitem__(self, idx):
return torch.tensor(self.x[idx], dtype=torch.float), torch.tensor(self.y[idx])
###Output
_____no_output_____
###Markdown
Make Batch Generator Batch size. You can change it if you want.
###Code
batch_size = 32
###Output
_____no_output_____
###Markdown
`DataLoader` for batch generating: `torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False)`
###Code
train_dataset = ECG(mode='train')
test_dataset = ECG(mode='test')
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Size check for single batch
###Code
x_train, y_train = next(iter(train_loader))
print(x_train.size())
print(y_train.size())
###Output
torch.Size([32, 1, 128])
torch.Size([32])
###Markdown
Number of total batches
###Code
total_batch = len(train_loader)
print(total_batch)
###Output
414
###Markdown
PyTorch layer modules for **Conv1D** Network `Conv1d` layer- `torch.nn.Conv1d(in_channels, out_channels, kernel_size)` `MaxPool1d` layer- `torch.nn.MaxPool1d(kernel_size, stride=None)`- Parameter `stride` defaults to `kernel_size`. `ReLU` layer- `torch.nn.ReLU()` `Linear` layer- `torch.nn.Linear(in_features, out_features, bias=True)` `Softmax` layer- `torch.nn.Softmax(dim=None)`- Parameter `dim` is usually set to `1`. Construct 1D CNN ECG classification model
###Code
class ECGConv1D(nn.Module):
def __init__(self):
super(ECGConv1D, self).__init__()
self.conv1 = nn.Conv1d(1, 16, 7, padding=3) # 128 x 16
self.relu1 = nn.LeakyReLU()
self.pool1 = nn.MaxPool1d(2) # 64 x 16
self.conv2 = nn.Conv1d(16, 16, 5, padding=2) # 64 x 16
self.relu2 = nn.LeakyReLU()
self.pool2 = nn.MaxPool1d(2) # 32 x 16
self.linear3 = nn.Linear(32 * 16, 128)
self.relu3 = nn.LeakyReLU()
self.linear4 = nn.Linear(128, 5)
self.softmax4 = nn.Softmax(dim=1)
def forward(self, x):
x = self.conv1(x)
x = self.relu1(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.relu2(x)
x = self.pool2(x)
x = x.view(-1, 32 * 16)
x = self.linear3(x)
x = self.relu3(x)
x = self.linear4(x)
x = self.softmax4(x)
return x
ecgnet = ECGConv1D()
ecgnet.to(device)
###Output
_____no_output_____
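The `x.view(-1, 32 * 16)` in `forward` relies on the input length being 128: each of the two max-pools halves the length (128 -> 64 -> 32), and with 16 channels that leaves 32 x 16 = 512 features per sample. A quick sanity check of that arithmetic with a dummy batch (a sketch using the `ecgnet` instance just created):

```python
# Push a dummy batch through the conv/pool stack to confirm the flattened size.
with torch.no_grad():
    dummy = torch.zeros(4, 1, 128, device=device)     # (batch, channels, length)
    feat = ecgnet.pool1(ecgnet.relu1(ecgnet.conv1(dummy)))
    feat = ecgnet.pool2(ecgnet.relu2(ecgnet.conv2(feat)))
    print(feat.shape)   # expected: torch.Size([4, 16, 32]) -> 16 * 32 = 512 features
```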
###Markdown
Training process settings
###Code
run = 10
epoch = 400
lr = 0.001
###Output
_____no_output_____
###Markdown
Training function
###Code
def train(nrun, model):
criterion = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=lr)
train_losses = list()
train_accs = list()
test_losses = list()
test_accs = list()
best_test_acc = 0 # best test accuracy
for e in range(epoch):
print("Epoch {} - ".format(e+1), end='')
# train
train_loss = 0.0
correct, total = 0, 0
for _, batch in enumerate(train_loader):
x, label = batch # get feature and label from a batch
x, label = x.to(device), label.to(device) # send to device
optimizer.zero_grad() # init all grads to zero
output = model(x) # forward propagation
loss = criterion(output, label) # calculate loss
loss.backward() # backward propagation
optimizer.step() # weight update
train_loss += loss.item()
correct += torch.sum(output.argmax(dim=1) == label).item()
total += len(label)
train_losses.append(train_loss / len(train_loader))
train_accs.append(correct / total)
print("loss: {:.4f}, acc: {:.2f}%".format(train_losses[-1], train_accs[-1]*100), end=' / ')
# test
with torch.no_grad():
test_loss = 0.0
correct, total = 0, 0
for _, batch in enumerate(test_loader):
x, label = batch
x, label = x.to(device), label.to(device)
output = model(x)
loss = criterion(output, label)
test_loss += loss.item()
correct += torch.sum(output.argmax(dim=1) == label).item()
total += len(label)
test_losses.append(test_loss / len(test_loader))
test_accs.append(correct / total)
print("test_loss: {:.4f}, test_acc: {:.2f}%".format(test_losses[-1], test_accs[-1]*100))
# save model that has best validation accuracy
if test_accs[-1] > best_test_acc:
best_test_acc = test_accs[-1]
torch.save(model.state_dict(), os.path.join(root_path, model_dir, '_'.join([model_name, str(nrun), 'best']) + model_ext))
# save model for each 10 epochs
if (e + 1) % 10 == 0:
torch.save(model.state_dict(), os.path.join(root_path, model_dir, '_'.join([model_name, str(nrun), str(e+1)]) + model_ext))
return train_losses, train_accs, test_losses, test_accs
###Output
_____no_output_____
###Markdown
Training process Repeat 10 times
###Code
best_test_accs = list()
for i in range(run):
print('Run', i+1)
ecgnet = ECGConv1D() # init new model
train_losses, train_accs, test_losses, test_accs = train(i, ecgnet.to(device)) # train
best_test_accs.append(max(test_accs)) # get best test accuracy
best_test_acc_epoch = np.array(test_accs).argmax() + 1
print('Best test accuracy {:.2f}% in epoch {}.'.format(best_test_accs[-1]*100, best_test_acc_epoch))
print('-' * 100)
df = pd.DataFrame({ # save model training process into csv file
'loss': train_losses,
'test_loss': test_losses,
'acc': train_accs,
'test_acc': test_accs
})
df.to_csv(os.path.join(root_path, csv_dir, '_'.join([csv_name, str(i+1)]) + csv_ext))
df = pd.DataFrame({'best_test_acc': best_test_accs}) # save best test accuracy of each run
df.to_csv(os.path.join(root_path, csv_dir, csv_accs_name + csv_ext))
###Output
Run 1
Epoch 1 - loss: 1.3349, acc: 57.54% / test_loss: 1.1609, test_acc: 76.32%
Epoch 2 - loss: 1.1277, acc: 79.02% / test_loss: 1.0856, test_acc: 82.60%
Epoch 3 - loss: 1.0807, acc: 82.95% / test_loss: 1.0603, test_acc: 84.69%
Epoch 4 - loss: 1.0573, acc: 85.03% / test_loss: 1.0605, test_acc: 84.92%
Epoch 5 - loss: 1.0396, acc: 86.81% / test_loss: 1.0050, test_acc: 90.91%
Epoch 6 - loss: 0.9978, acc: 91.19% / test_loss: 0.9859, test_acc: 92.15%
Epoch 7 - loss: 0.9887, acc: 92.00% / test_loss: 0.9822, test_acc: 92.29%
Epoch 8 - loss: 0.9856, acc: 92.06% / test_loss: 0.9805, test_acc: 92.56%
Epoch 9 - loss: 0.9816, acc: 92.52% / test_loss: 0.9742, test_acc: 93.15%
Epoch 10 - loss: 0.9796, acc: 92.71% / test_loss: 0.9762, test_acc: 92.99%
Epoch 11 - loss: 0.9788, acc: 92.80% / test_loss: 0.9732, test_acc: 93.26%
Epoch 12 - loss: 0.9749, acc: 93.16% / test_loss: 0.9693, test_acc: 93.70%
Epoch 13 - loss: 0.9739, acc: 93.22% / test_loss: 0.9701, test_acc: 93.49%
Epoch 14 - loss: 0.9740, acc: 93.18% / test_loss: 0.9706, test_acc: 93.47%
Epoch 15 - loss: 0.9734, acc: 93.17% / test_loss: 0.9668, test_acc: 93.95%
Epoch 16 - loss: 0.9722, acc: 93.34% / test_loss: 0.9653, test_acc: 94.04%
Epoch 17 - loss: 0.9707, acc: 93.51% / test_loss: 0.9650, test_acc: 94.09%
Epoch 18 - loss: 0.9720, acc: 93.39% / test_loss: 0.9645, test_acc: 94.13%
Epoch 19 - loss: 0.9678, acc: 93.74% / test_loss: 0.9637, test_acc: 94.20%
Epoch 20 - loss: 0.9674, acc: 93.79% / test_loss: 0.9738, test_acc: 93.16%
Epoch 21 - loss: 0.9671, acc: 93.81% / test_loss: 0.9650, test_acc: 94.09%
Epoch 22 - loss: 0.9636, acc: 94.16% / test_loss: 0.9607, test_acc: 94.47%
Epoch 23 - loss: 0.9631, acc: 94.22% / test_loss: 0.9600, test_acc: 94.56%
Epoch 24 - loss: 0.9634, acc: 94.19% / test_loss: 0.9593, test_acc: 94.63%
Epoch 25 - loss: 0.9623, acc: 94.31% / test_loss: 0.9600, test_acc: 94.59%
Epoch 26 - loss: 0.9604, acc: 94.52% / test_loss: 0.9573, test_acc: 94.81%
Epoch 27 - loss: 0.9613, acc: 94.41% / test_loss: 0.9588, test_acc: 94.65%
Epoch 28 - loss: 0.9622, acc: 94.32% / test_loss: 0.9587, test_acc: 94.63%
Epoch 29 - loss: 0.9603, acc: 94.47% / test_loss: 0.9572, test_acc: 94.80%
Epoch 30 - loss: 0.9598, acc: 94.53% / test_loss: 0.9569, test_acc: 94.81%
Epoch 31 - loss: 0.9607, acc: 94.44% / test_loss: 0.9599, test_acc: 94.56%
Epoch 32 - loss: 0.9595, acc: 94.59% / test_loss: 0.9574, test_acc: 94.71%
Epoch 33 - loss: 0.9609, acc: 94.39% / test_loss: 0.9576, test_acc: 94.77%
Epoch 34 - loss: 0.9585, acc: 94.69% / test_loss: 0.9578, test_acc: 94.75%
Epoch 35 - loss: 0.9614, acc: 94.38% / test_loss: 0.9566, test_acc: 94.84%
Epoch 36 - loss: 0.9594, acc: 94.56% / test_loss: 0.9594, test_acc: 94.59%
Epoch 37 - loss: 0.9588, acc: 94.63% / test_loss: 0.9566, test_acc: 94.84%
Epoch 38 - loss: 0.9590, acc: 94.62% / test_loss: 0.9569, test_acc: 94.79%
Epoch 39 - loss: 0.9593, acc: 94.58% / test_loss: 0.9584, test_acc: 94.70%
Epoch 40 - loss: 0.9584, acc: 94.65% / test_loss: 0.9570, test_acc: 94.81%
Epoch 41 - loss: 0.9577, acc: 94.71% / test_loss: 0.9637, test_acc: 94.11%
Epoch 42 - loss: 0.9585, acc: 94.68% / test_loss: 0.9560, test_acc: 94.86%
Epoch 43 - loss: 0.9580, acc: 94.72% / test_loss: 0.9621, test_acc: 94.37%
Epoch 44 - loss: 0.9581, acc: 94.65% / test_loss: 0.9549, test_acc: 94.99%
Epoch 45 - loss: 0.9579, acc: 94.74% / test_loss: 0.9558, test_acc: 94.91%
Epoch 46 - loss: 0.9579, acc: 94.73% / test_loss: 0.9584, test_acc: 94.64%
Epoch 47 - loss: 0.9577, acc: 94.72% / test_loss: 0.9567, test_acc: 94.82%
Epoch 48 - loss: 0.9577, acc: 94.72% / test_loss: 0.9557, test_acc: 94.94%
Epoch 49 - loss: 0.9564, acc: 94.84% / test_loss: 0.9557, test_acc: 94.91%
Epoch 50 - loss: 0.9564, acc: 94.84% / test_loss: 0.9548, test_acc: 95.02%
Epoch 51 - loss: 0.9574, acc: 94.77% / test_loss: 0.9568, test_acc: 94.83%
Epoch 52 - loss: 0.9581, acc: 94.68% / test_loss: 0.9570, test_acc: 94.80%
Epoch 53 - loss: 0.9564, acc: 94.86% / test_loss: 0.9523, test_acc: 95.29%
Epoch 54 - loss: 0.9544, acc: 95.05% / test_loss: 0.9532, test_acc: 95.21%
Epoch 55 - loss: 0.9543, acc: 95.05% / test_loss: 0.9566, test_acc: 94.84%
Epoch 56 - loss: 0.9539, acc: 95.10% / test_loss: 0.9544, test_acc: 95.05%
Epoch 57 - loss: 0.9550, acc: 94.99% / test_loss: 0.9523, test_acc: 95.24%
Epoch 58 - loss: 0.9548, acc: 95.03% / test_loss: 0.9528, test_acc: 95.25%
Epoch 59 - loss: 0.9537, acc: 95.13% / test_loss: 0.9517, test_acc: 95.32%
Epoch 60 - loss: 0.9532, acc: 95.16% / test_loss: 0.9518, test_acc: 95.32%
Epoch 61 - loss: 0.9532, acc: 95.18% / test_loss: 0.9514, test_acc: 95.36%
Epoch 62 - loss: 0.9539, acc: 95.11% / test_loss: 0.9523, test_acc: 95.26%
Epoch 63 - loss: 0.9537, acc: 95.11% / test_loss: 0.9540, test_acc: 95.08%
Epoch 64 - loss: 0.9538, acc: 95.14% / test_loss: 0.9546, test_acc: 95.02%
Epoch 65 - loss: 0.9540, acc: 95.10% / test_loss: 0.9528, test_acc: 95.20%
Epoch 66 - loss: 0.9534, acc: 95.15% / test_loss: 0.9541, test_acc: 95.06%
Epoch 67 - loss: 0.9533, acc: 95.18% / test_loss: 0.9518, test_acc: 95.31%
Epoch 68 - loss: 0.9524, acc: 95.25% / test_loss: 0.9530, test_acc: 95.15%
Epoch 69 - loss: 0.9525, acc: 95.24% / test_loss: 0.9528, test_acc: 95.23%
Epoch 70 - loss: 0.9526, acc: 95.21% / test_loss: 0.9553, test_acc: 94.96%
Epoch 71 - loss: 0.9553, acc: 94.95% / test_loss: 0.9568, test_acc: 94.75%
Epoch 72 - loss: 0.9527, acc: 95.24% / test_loss: 0.9510, test_acc: 95.39%
Epoch 73 - loss: 0.9520, acc: 95.30% / test_loss: 0.9510, test_acc: 95.37%
Epoch 74 - loss: 0.9524, acc: 95.25% / test_loss: 0.9531, test_acc: 95.17%
Epoch 75 - loss: 0.9519, acc: 95.30% / test_loss: 0.9585, test_acc: 94.54%
Epoch 76 - loss: 0.9521, acc: 95.27% / test_loss: 0.9526, test_acc: 95.22%
Epoch 77 - loss: 0.9524, acc: 95.24% / test_loss: 0.9534, test_acc: 95.15%
Epoch 78 - loss: 0.9516, acc: 95.33% / test_loss: 0.9513, test_acc: 95.34%
Epoch 79 - loss: 0.9526, acc: 95.22% / test_loss: 0.9513, test_acc: 95.37%
Epoch 80 - loss: 0.9507, acc: 95.39% / test_loss: 0.9502, test_acc: 95.45%
Epoch 81 - loss: 0.9503, acc: 95.45% / test_loss: 0.9509, test_acc: 95.36%
Epoch 82 - loss: 0.9509, acc: 95.36% / test_loss: 0.9501, test_acc: 95.46%
Epoch 83 - loss: 0.9501, acc: 95.49% / test_loss: 0.9508, test_acc: 95.43%
Epoch 84 - loss: 0.9511, acc: 95.39% / test_loss: 0.9514, test_acc: 95.33%
Epoch 85 - loss: 0.9512, acc: 95.36% / test_loss: 0.9498, test_acc: 95.52%
Epoch 86 - loss: 0.9495, acc: 95.54% / test_loss: 0.9504, test_acc: 95.44%
Epoch 87 - loss: 0.9500, acc: 95.49% / test_loss: 0.9502, test_acc: 95.49%
Epoch 88 - loss: 0.9493, acc: 95.55% / test_loss: 0.9497, test_acc: 95.48%
Epoch 89 - loss: 0.9507, acc: 95.42% / test_loss: 0.9525, test_acc: 95.24%
Epoch 90 - loss: 0.9514, acc: 95.36% / test_loss: 0.9505, test_acc: 95.42%
Epoch 91 - loss: 0.9499, acc: 95.51% / test_loss: 0.9544, test_acc: 95.08%
Epoch 92 - loss: 0.9498, acc: 95.52% / test_loss: 0.9518, test_acc: 95.28%
Epoch 93 - loss: 0.9499, acc: 95.49% / test_loss: 0.9573, test_acc: 94.68%
Epoch 94 - loss: 0.9498, acc: 95.51% / test_loss: 0.9514, test_acc: 95.36%
Epoch 95 - loss: 0.9505, acc: 95.43% / test_loss: 0.9506, test_acc: 95.44%
Epoch 96 - loss: 0.9500, acc: 95.47% / test_loss: 0.9522, test_acc: 95.26%
Epoch 97 - loss: 0.9498, acc: 95.51% / test_loss: 0.9540, test_acc: 95.06%
Epoch 98 - loss: 0.9495, acc: 95.55% / test_loss: 0.9493, test_acc: 95.56%
Epoch 99 - loss: 0.9496, acc: 95.54% / test_loss: 0.9522, test_acc: 95.26%
Epoch 100 - loss: 0.9488, acc: 95.61% / test_loss: 0.9498, test_acc: 95.49%
Epoch 101 - loss: 0.9488, acc: 95.63% / test_loss: 0.9492, test_acc: 95.58%
Epoch 102 - loss: 0.9510, acc: 95.39% / test_loss: 0.9494, test_acc: 95.57%
Epoch 103 - loss: 0.9491, acc: 95.58% / test_loss: 0.9520, test_acc: 95.30%
Epoch 104 - loss: 0.9508, acc: 95.39% / test_loss: 0.9509, test_acc: 95.38%
Epoch 105 - loss: 0.9502, acc: 95.48% / test_loss: 0.9506, test_acc: 95.41%
Epoch 106 - loss: 0.9501, acc: 95.49% / test_loss: 0.9523, test_acc: 95.24%
Epoch 107 - loss: 0.9486, acc: 95.63% / test_loss: 0.9497, test_acc: 95.52%
Epoch 108 - loss: 0.9474, acc: 95.72% / test_loss: 0.9496, test_acc: 95.50%
Epoch 109 - loss: 0.9460, acc: 95.88% / test_loss: 0.9491, test_acc: 95.55%
Epoch 110 - loss: 0.9469, acc: 95.79% / test_loss: 0.9470, test_acc: 95.75%
Epoch 111 - loss: 0.9466, acc: 95.82% / test_loss: 0.9483, test_acc: 95.63%
Epoch 112 - loss: 0.9462, acc: 95.87% / test_loss: 0.9481, test_acc: 95.65%
Epoch 113 - loss: 0.9483, acc: 95.65% / test_loss: 0.9479, test_acc: 95.69%
Epoch 114 - loss: 0.9446, acc: 96.03% / test_loss: 0.9471, test_acc: 95.77%
Epoch 115 - loss: 0.9448, acc: 96.01% / test_loss: 0.9478, test_acc: 95.70%
Epoch 116 - loss: 0.9436, acc: 96.10% / test_loss: 0.9458, test_acc: 95.87%
Epoch 117 - loss: 0.9433, acc: 96.14% / test_loss: 0.9437, test_acc: 96.10%
Epoch 118 - loss: 0.9425, acc: 96.22% / test_loss: 0.9411, test_acc: 96.35%
Epoch 119 - loss: 0.9387, acc: 96.59% / test_loss: 0.9387, test_acc: 96.58%
Epoch 120 - loss: 0.9363, acc: 96.83% / test_loss: 0.9375, test_acc: 96.74%
Epoch 121 - loss: 0.9325, acc: 97.24% / test_loss: 0.9380, test_acc: 96.68%
Epoch 122 - loss: 0.9330, acc: 97.20% / test_loss: 0.9379, test_acc: 96.67%
Epoch 123 - loss: 0.9334, acc: 97.13% / test_loss: 0.9397, test_acc: 96.50%
Epoch 124 - loss: 0.9313, acc: 97.37% / test_loss: 0.9396, test_acc: 96.49%
Epoch 125 - loss: 0.9321, acc: 97.22% / test_loss: 0.9355, test_acc: 96.93%
Epoch 126 - loss: 0.9313, acc: 97.33% / test_loss: 0.9365, test_acc: 96.82%
Epoch 127 - loss: 0.9307, acc: 97.42% / test_loss: 0.9346, test_acc: 97.04%
Epoch 128 - loss: 0.9304, acc: 97.42% / test_loss: 0.9346, test_acc: 97.01%
Epoch 129 - loss: 0.9290, acc: 97.61% / test_loss: 0.9360, test_acc: 96.92%
Epoch 130 - loss: 0.9305, acc: 97.43% / test_loss: 0.9399, test_acc: 96.47%
Epoch 131 - loss: 0.9312, acc: 97.31% / test_loss: 0.9340, test_acc: 97.09%
Epoch 132 - loss: 0.9293, acc: 97.55% / test_loss: 0.9339, test_acc: 97.09%
Epoch 133 - loss: 0.9284, acc: 97.63% / test_loss: 0.9345, test_acc: 96.96%
Epoch 134 - loss: 0.9299, acc: 97.47% / test_loss: 0.9340, test_acc: 97.07%
Epoch 135 - loss: 0.9279, acc: 97.73% / test_loss: 0.9351, test_acc: 96.96%
Epoch 136 - loss: 0.9279, acc: 97.73% / test_loss: 0.9358, test_acc: 96.90%
Epoch 137 - loss: 0.9283, acc: 97.67% / test_loss: 0.9466, test_acc: 95.78%
Epoch 138 - loss: 0.9271, acc: 97.80% / test_loss: 0.9371, test_acc: 96.75%
Epoch 139 - loss: 0.9276, acc: 97.71% / test_loss: 0.9325, test_acc: 97.19%
Epoch 140 - loss: 0.9272, acc: 97.76% / test_loss: 0.9370, test_acc: 96.80%
Epoch 141 - loss: 0.9269, acc: 97.82% / test_loss: 0.9355, test_acc: 96.90%
Epoch 142 - loss: 0.9266, acc: 97.80% / test_loss: 0.9308, test_acc: 97.40%
Epoch 143 - loss: 0.9253, acc: 97.97% / test_loss: 0.9317, test_acc: 97.27%
Epoch 144 - loss: 0.9252, acc: 97.99% / test_loss: 0.9296, test_acc: 97.54%
Epoch 145 - loss: 0.9242, acc: 98.07% / test_loss: 0.9299, test_acc: 97.46%
Epoch 146 - loss: 0.9251, acc: 97.98% / test_loss: 0.9315, test_acc: 97.32%
Epoch 147 - loss: 0.9257, acc: 97.93% / test_loss: 0.9308, test_acc: 97.37%
Epoch 148 - loss: 0.9259, acc: 97.87% / test_loss: 0.9301, test_acc: 97.46%
Epoch 149 - loss: 0.9250, acc: 97.98% / test_loss: 0.9308, test_acc: 97.40%
Epoch 150 - loss: 0.9258, acc: 97.94% / test_loss: 0.9307, test_acc: 97.38%
Epoch 151 - loss: 0.9242, acc: 98.06% / test_loss: 0.9306, test_acc: 97.42%
Epoch 152 - loss: 0.9248, acc: 97.98% / test_loss: 0.9310, test_acc: 97.31%
Epoch 153 - loss: 0.9253, acc: 97.92% / test_loss: 0.9296, test_acc: 97.47%
Epoch 154 - loss: 0.9246, acc: 98.07% / test_loss: 0.9299, test_acc: 97.46%
Epoch 155 - loss: 0.9239, acc: 98.10% / test_loss: 0.9308, test_acc: 97.36%
Epoch 156 - loss: 0.9235, acc: 98.16% / test_loss: 0.9289, test_acc: 97.59%
Epoch 157 - loss: 0.9242, acc: 98.05% / test_loss: 0.9292, test_acc: 97.58%
Epoch 158 - loss: 0.9197, acc: 98.52% / test_loss: 0.9259, test_acc: 97.89%
Epoch 159 - loss: 0.9222, acc: 98.22% / test_loss: 0.9290, test_acc: 97.55%
Epoch 160 - loss: 0.9192, acc: 98.57% / test_loss: 0.9251, test_acc: 97.96%
Epoch 161 - loss: 0.9191, acc: 98.54% / test_loss: 0.9253, test_acc: 97.95%
Epoch 162 - loss: 0.9194, acc: 98.53% / test_loss: 0.9265, test_acc: 97.81%
Epoch 163 - loss: 0.9192, acc: 98.56% / test_loss: 0.9277, test_acc: 97.68%
Epoch 164 - loss: 0.9194, acc: 98.57% / test_loss: 0.9266, test_acc: 97.82%
Epoch 165 - loss: 0.9204, acc: 98.45% / test_loss: 0.9287, test_acc: 97.59%
Epoch 166 - loss: 0.9196, acc: 98.51% / test_loss: 0.9286, test_acc: 97.64%
Epoch 167 - loss: 0.9185, acc: 98.64% / test_loss: 0.9258, test_acc: 97.88%
Epoch 168 - loss: 0.9196, acc: 98.57% / test_loss: 0.9250, test_acc: 97.96%
Epoch 169 - loss: 0.9189, acc: 98.59% / test_loss: 0.9241, test_acc: 98.06%
Epoch 170 - loss: 0.9173, acc: 98.74% / test_loss: 0.9283, test_acc: 97.61%
Epoch 171 - loss: 0.9171, acc: 98.78% / test_loss: 0.9246, test_acc: 97.96%
Epoch 172 - loss: 0.9172, acc: 98.77% / test_loss: 0.9264, test_acc: 97.88%
Epoch 173 - loss: 0.9205, acc: 98.46% / test_loss: 0.9279, test_acc: 97.71%
Epoch 174 - loss: 0.9189, acc: 98.57% / test_loss: 0.9259, test_acc: 97.89%
Epoch 175 - loss: 0.9182, acc: 98.67% / test_loss: 0.9245, test_acc: 98.02%
Epoch 176 - loss: 0.9171, acc: 98.78% / test_loss: 0.9263, test_acc: 97.79%
Epoch 177 - loss: 0.9180, acc: 98.69% / test_loss: 0.9260, test_acc: 97.92%
Epoch 178 - loss: 0.9186, acc: 98.62% / test_loss: 0.9286, test_acc: 97.68%
Epoch 179 - loss: 0.9171, acc: 98.79% / test_loss: 0.9238, test_acc: 98.07%
Epoch 180 - loss: 0.9168, acc: 98.80% / test_loss: 0.9253, test_acc: 97.98%
Epoch 181 - loss: 0.9171, acc: 98.78% / test_loss: 0.9231, test_acc: 98.20%
Epoch 182 - loss: 0.9164, acc: 98.85% / test_loss: 0.9225, test_acc: 98.23%
Epoch 183 - loss: 0.9162, acc: 98.86% / test_loss: 0.9240, test_acc: 98.10%
Epoch 184 - loss: 0.9185, acc: 98.63% / test_loss: 0.9247, test_acc: 97.98%
Epoch 185 - loss: 0.9167, acc: 98.84% / test_loss: 0.9228, test_acc: 98.20%
Epoch 186 - loss: 0.9160, acc: 98.91% / test_loss: 0.9240, test_acc: 98.07%
Epoch 187 - loss: 0.9172, acc: 98.75% / test_loss: 0.9266, test_acc: 97.83%
Epoch 188 - loss: 0.9166, acc: 98.81% / test_loss: 0.9236, test_acc: 98.13%
Epoch 189 - loss: 0.9171, acc: 98.77% / test_loss: 0.9246, test_acc: 98.03%
Epoch 190 - loss: 0.9175, acc: 98.74% / test_loss: 0.9255, test_acc: 97.89%
Epoch 191 - loss: 0.9173, acc: 98.74% / test_loss: 0.9271, test_acc: 97.74%
Epoch 192 - loss: 0.9179, acc: 98.69% / test_loss: 0.9305, test_acc: 97.44%
Epoch 193 - loss: 0.9181, acc: 98.68% / test_loss: 0.9245, test_acc: 98.01%
Epoch 194 - loss: 0.9163, acc: 98.85% / test_loss: 0.9254, test_acc: 97.92%
Epoch 195 - loss: 0.9156, acc: 98.93% / test_loss: 0.9230, test_acc: 98.20%
Epoch 196 - loss: 0.9159, acc: 98.91% / test_loss: 0.9236, test_acc: 98.13%
Epoch 197 - loss: 0.9158, acc: 98.92% / test_loss: 0.9233, test_acc: 98.19%
Epoch 198 - loss: 0.9161, acc: 98.89% / test_loss: 0.9282, test_acc: 97.67%
Epoch 199 - loss: 0.9179, acc: 98.70% / test_loss: 0.9265, test_acc: 97.82%
Epoch 200 - loss: 0.9165, acc: 98.85% / test_loss: 0.9245, test_acc: 98.01%
Epoch 201 - loss: 0.9167, acc: 98.78% / test_loss: 0.9250, test_acc: 97.96%
Epoch 202 - loss: 0.9167, acc: 98.84% / test_loss: 0.9258, test_acc: 97.85%
Epoch 203 - loss: 0.9160, acc: 98.88% / test_loss: 0.9232, test_acc: 98.14%
Epoch 204 - loss: 0.9158, acc: 98.91% / test_loss: 0.9241, test_acc: 98.07%
Epoch 205 - loss: 0.9168, acc: 98.80% / test_loss: 0.9247, test_acc: 97.98%
Epoch 206 - loss: 0.9189, acc: 98.59% / test_loss: 0.9253, test_acc: 97.95%
Epoch 207 - loss: 0.9166, acc: 98.83% / test_loss: 0.9245, test_acc: 98.03%
Epoch 208 - loss: 0.9158, acc: 98.91% / test_loss: 0.9236, test_acc: 98.10%
Epoch 209 - loss: 0.9161, acc: 98.88% / test_loss: 0.9239, test_acc: 98.09%
Epoch 210 - loss: 0.9156, acc: 98.94% / test_loss: 0.9239, test_acc: 98.07%
Epoch 211 - loss: 0.9170, acc: 98.77% / test_loss: 0.9281, test_acc: 97.60%
Epoch 212 - loss: 0.9151, acc: 98.97% / test_loss: 0.9224, test_acc: 98.26%
Epoch 213 - loss: 0.9161, acc: 98.86% / test_loss: 0.9246, test_acc: 98.00%
Epoch 214 - loss: 0.9169, acc: 98.78% / test_loss: 0.9298, test_acc: 97.49%
Epoch 215 - loss: 0.9166, acc: 98.80% / test_loss: 0.9269, test_acc: 97.80%
Epoch 216 - loss: 0.9159, acc: 98.91% / test_loss: 0.9230, test_acc: 98.19%
Epoch 217 - loss: 0.9164, acc: 98.88% / test_loss: 0.9231, test_acc: 98.18%
Epoch 218 - loss: 0.9167, acc: 98.81% / test_loss: 0.9251, test_acc: 98.00%
Epoch 219 - loss: 0.9167, acc: 98.81% / test_loss: 0.9243, test_acc: 98.05%
Epoch 220 - loss: 0.9157, acc: 98.94% / test_loss: 0.9258, test_acc: 97.91%
Epoch 221 - loss: 0.9164, acc: 98.86% / test_loss: 0.9243, test_acc: 98.02%
Epoch 222 - loss: 0.9160, acc: 98.89% / test_loss: 0.9246, test_acc: 98.04%
Epoch 223 - loss: 0.9163, acc: 98.84% / test_loss: 0.9252, test_acc: 97.92%
Epoch 224 - loss: 0.9156, acc: 98.94% / test_loss: 0.9301, test_acc: 97.46%
Epoch 225 - loss: 0.9162, acc: 98.88% / test_loss: 0.9225, test_acc: 98.23%
Epoch 226 - loss: 0.9148, acc: 99.00% / test_loss: 0.9247, test_acc: 98.01%
Epoch 227 - loss: 0.9163, acc: 98.87% / test_loss: 0.9236, test_acc: 98.11%
Epoch 228 - loss: 0.9156, acc: 98.94% / test_loss: 0.9243, test_acc: 98.04%
Epoch 229 - loss: 0.9162, acc: 98.88% / test_loss: 0.9259, test_acc: 97.87%
Epoch 230 - loss: 0.9176, acc: 98.71% / test_loss: 0.9233, test_acc: 98.12%
Epoch 231 - loss: 0.9155, acc: 98.91% / test_loss: 0.9235, test_acc: 98.14%
Epoch 232 - loss: 0.9162, acc: 98.88% / test_loss: 0.9230, test_acc: 98.18%
Epoch 233 - loss: 0.9162, acc: 98.86% / test_loss: 0.9237, test_acc: 98.10%
Epoch 234 - loss: 0.9167, acc: 98.81% / test_loss: 0.9230, test_acc: 98.18%
Epoch 235 - loss: 0.9172, acc: 98.76% / test_loss: 0.9270, test_acc: 97.79%
Epoch 236 - loss: 0.9167, acc: 98.80% / test_loss: 0.9216, test_acc: 98.33%
Epoch 237 - loss: 0.9154, acc: 98.96% / test_loss: 0.9222, test_acc: 98.26%
Epoch 238 - loss: 0.9162, acc: 98.88% / test_loss: 0.9223, test_acc: 98.26%
Epoch 239 - loss: 0.9164, acc: 98.84% / test_loss: 0.9250, test_acc: 97.95%
Epoch 240 - loss: 0.9152, acc: 98.97% / test_loss: 0.9228, test_acc: 98.22%
Epoch 241 - loss: 0.9144, acc: 99.05% / test_loss: 0.9253, test_acc: 97.96%
Epoch 242 - loss: 0.9159, acc: 98.88% / test_loss: 0.9229, test_acc: 98.19%
Epoch 243 - loss: 0.9156, acc: 98.94% / test_loss: 0.9231, test_acc: 98.17%
Epoch 244 - loss: 0.9169, acc: 98.80% / test_loss: 0.9229, test_acc: 98.18%
Epoch 245 - loss: 0.9155, acc: 98.93% / test_loss: 0.9234, test_acc: 98.14%
Epoch 246 - loss: 0.9164, acc: 98.84% / test_loss: 0.9223, test_acc: 98.24%
Epoch 247 - loss: 0.9147, acc: 99.02% / test_loss: 0.9225, test_acc: 98.18%
Epoch 248 - loss: 0.9164, acc: 98.81% / test_loss: 0.9263, test_acc: 97.85%
Epoch 249 - loss: 0.9159, acc: 98.89% / test_loss: 0.9223, test_acc: 98.26%
Epoch 250 - loss: 0.9148, acc: 99.00% / test_loss: 0.9225, test_acc: 98.24%
Epoch 251 - loss: 0.9152, acc: 98.97% / test_loss: 0.9225, test_acc: 98.20%
Epoch 252 - loss: 0.9153, acc: 98.95% / test_loss: 0.9239, test_acc: 98.07%
Epoch 253 - loss: 0.9139, acc: 99.10% / test_loss: 0.9221, test_acc: 98.26%
Epoch 254 - loss: 0.9157, acc: 98.92% / test_loss: 0.9227, test_acc: 98.18%
Epoch 255 - loss: 0.9149, acc: 99.01% / test_loss: 0.9214, test_acc: 98.32%
Epoch 256 - loss: 0.9145, acc: 99.02% / test_loss: 0.9215, test_acc: 98.35%
Epoch 257 - loss: 0.9159, acc: 98.88% / test_loss: 0.9220, test_acc: 98.28%
Epoch 258 - loss: 0.9152, acc: 98.96% / test_loss: 0.9242, test_acc: 98.01%
Epoch 259 - loss: 0.9163, acc: 98.85% / test_loss: 0.9226, test_acc: 98.23%
Epoch 260 - loss: 0.9155, acc: 98.94% / test_loss: 0.9224, test_acc: 98.23%
Epoch 261 - loss: 0.9138, acc: 99.10% / test_loss: 0.9211, test_acc: 98.38%
Epoch 262 - loss: 0.9147, acc: 99.03% / test_loss: 0.9235, test_acc: 98.11%
Epoch 263 - loss: 0.9158, acc: 98.91% / test_loss: 0.9234, test_acc: 98.16%
Epoch 264 - loss: 0.9157, acc: 98.91% / test_loss: 0.9266, test_acc: 97.81%
Epoch 265 - loss: 0.9154, acc: 98.95% / test_loss: 0.9216, test_acc: 98.32%
Epoch 266 - loss: 0.9150, acc: 98.98% / test_loss: 0.9285, test_acc: 97.64%
Epoch 267 - loss: 0.9158, acc: 98.92% / test_loss: 0.9245, test_acc: 98.01%
Epoch 268 - loss: 0.9159, acc: 98.90% / test_loss: 0.9221, test_acc: 98.30%
Epoch 269 - loss: 0.9151, acc: 98.96% / test_loss: 0.9218, test_acc: 98.30%
Epoch 270 - loss: 0.9146, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 271 - loss: 0.9136, acc: 99.12% / test_loss: 0.9215, test_acc: 98.31%
Epoch 272 - loss: 0.9139, acc: 99.10% / test_loss: 0.9213, test_acc: 98.35%
Epoch 273 - loss: 0.9159, acc: 98.91% / test_loss: 0.9228, test_acc: 98.20%
Epoch 274 - loss: 0.9165, acc: 98.84% / test_loss: 0.9282, test_acc: 97.61%
Epoch 275 - loss: 0.9164, acc: 98.83% / test_loss: 0.9248, test_acc: 97.98%
Epoch 276 - loss: 0.9164, acc: 98.84% / test_loss: 0.9233, test_acc: 98.14%
Epoch 277 - loss: 0.9148, acc: 99.00% / test_loss: 0.9230, test_acc: 98.14%
Epoch 278 - loss: 0.9143, acc: 99.05% / test_loss: 0.9225, test_acc: 98.23%
Epoch 279 - loss: 0.9148, acc: 99.00% / test_loss: 0.9226, test_acc: 98.23%
Epoch 280 - loss: 0.9140, acc: 99.07% / test_loss: 0.9222, test_acc: 98.27%
Epoch 281 - loss: 0.9144, acc: 99.06% / test_loss: 0.9239, test_acc: 98.08%
Epoch 282 - loss: 0.9153, acc: 98.96% / test_loss: 0.9216, test_acc: 98.32%
Epoch 283 - loss: 0.9139, acc: 99.09% / test_loss: 0.9230, test_acc: 98.17%
Epoch 284 - loss: 0.9149, acc: 99.00% / test_loss: 0.9217, test_acc: 98.29%
Epoch 285 - loss: 0.9157, acc: 98.89% / test_loss: 0.9222, test_acc: 98.27%
Epoch 286 - loss: 0.9143, acc: 99.06% / test_loss: 0.9214, test_acc: 98.35%
Epoch 287 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.23%
Epoch 288 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.29%
Epoch 289 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.32%
Epoch 290 - loss: 0.9141, acc: 99.09% / test_loss: 0.9213, test_acc: 98.33%
Epoch 291 - loss: 0.9150, acc: 98.99% / test_loss: 0.9235, test_acc: 98.14%
Epoch 292 - loss: 0.9158, acc: 98.90% / test_loss: 0.9233, test_acc: 98.14%
Epoch 293 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.28%
Epoch 294 - loss: 0.9161, acc: 98.87% / test_loss: 0.9253, test_acc: 97.94%
Epoch 295 - loss: 0.9163, acc: 98.86% / test_loss: 0.9237, test_acc: 98.11%
Epoch 296 - loss: 0.9147, acc: 99.01% / test_loss: 0.9237, test_acc: 98.10%
Epoch 297 - loss: 0.9145, acc: 99.04% / test_loss: 0.9229, test_acc: 98.16%
Epoch 298 - loss: 0.9159, acc: 98.91% / test_loss: 0.9236, test_acc: 98.07%
Epoch 299 - loss: 0.9144, acc: 99.06% / test_loss: 0.9223, test_acc: 98.24%
Epoch 300 - loss: 0.9156, acc: 98.91% / test_loss: 0.9235, test_acc: 98.13%
Epoch 301 - loss: 0.9141, acc: 99.07% / test_loss: 0.9219, test_acc: 98.29%
Epoch 302 - loss: 0.9144, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 303 - loss: 0.9146, acc: 99.00% / test_loss: 0.9231, test_acc: 98.18%
Epoch 304 - loss: 0.9138, acc: 99.10% / test_loss: 0.9221, test_acc: 98.26%
Epoch 305 - loss: 0.9152, acc: 98.95% / test_loss: 0.9222, test_acc: 98.26%
Epoch 306 - loss: 0.9141, acc: 99.06% / test_loss: 0.9201, test_acc: 98.46%
Epoch 307 - loss: 0.9143, acc: 99.06% / test_loss: 0.9218, test_acc: 98.29%
Epoch 308 - loss: 0.9138, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 309 - loss: 0.9144, acc: 99.03% / test_loss: 0.9247, test_acc: 98.00%
Epoch 310 - loss: 0.9167, acc: 98.80% / test_loss: 0.9217, test_acc: 98.31%
Epoch 311 - loss: 0.9147, acc: 99.02% / test_loss: 0.9239, test_acc: 98.07%
Epoch 312 - loss: 0.9138, acc: 99.11% / test_loss: 0.9235, test_acc: 98.11%
Epoch 313 - loss: 0.9149, acc: 99.01% / test_loss: 0.9226, test_acc: 98.20%
Epoch 314 - loss: 0.9147, acc: 99.03% / test_loss: 0.9234, test_acc: 98.14%
Epoch 315 - loss: 0.9147, acc: 99.01% / test_loss: 0.9241, test_acc: 98.07%
Epoch 316 - loss: 0.9157, acc: 98.92% / test_loss: 0.9240, test_acc: 98.07%
Epoch 317 - loss: 0.9146, acc: 99.03% / test_loss: 0.9225, test_acc: 98.23%
Epoch 318 - loss: 0.9132, acc: 99.16% / test_loss: 0.9219, test_acc: 98.28%
Epoch 319 - loss: 0.9146, acc: 99.02% / test_loss: 0.9222, test_acc: 98.26%
Epoch 320 - loss: 0.9150, acc: 98.98% / test_loss: 0.9234, test_acc: 98.12%
Epoch 321 - loss: 0.9148, acc: 99.01% / test_loss: 0.9218, test_acc: 98.30%
Epoch 322 - loss: 0.9134, acc: 99.15% / test_loss: 0.9248, test_acc: 98.00%
Epoch 323 - loss: 0.9136, acc: 99.12% / test_loss: 0.9209, test_acc: 98.39%
Epoch 324 - loss: 0.9147, acc: 99.00% / test_loss: 0.9218, test_acc: 98.28%
Epoch 325 - loss: 0.9139, acc: 99.10% / test_loss: 0.9220, test_acc: 98.25%
Epoch 326 - loss: 0.9139, acc: 99.09% / test_loss: 0.9229, test_acc: 98.20%
Epoch 327 - loss: 0.9149, acc: 99.00% / test_loss: 0.9221, test_acc: 98.27%
Epoch 328 - loss: 0.9141, acc: 99.07% / test_loss: 0.9221, test_acc: 98.26%
Epoch 329 - loss: 0.9145, acc: 99.03% / test_loss: 0.9259, test_acc: 97.88%
Epoch 330 - loss: 0.9134, acc: 99.14% / test_loss: 0.9226, test_acc: 98.21%
Epoch 331 - loss: 0.9145, acc: 99.04% / test_loss: 0.9223, test_acc: 98.25%
Epoch 332 - loss: 0.9137, acc: 99.10% / test_loss: 0.9211, test_acc: 98.35%
Epoch 333 - loss: 0.9141, acc: 99.09% / test_loss: 0.9215, test_acc: 98.32%
Epoch 334 - loss: 0.9164, acc: 98.81% / test_loss: 0.9222, test_acc: 98.23%
Epoch 335 - loss: 0.9154, acc: 98.95% / test_loss: 0.9216, test_acc: 98.29%
Epoch 336 - loss: 0.9148, acc: 99.00% / test_loss: 0.9222, test_acc: 98.26%
Epoch 337 - loss: 0.9132, acc: 99.17% / test_loss: 0.9220, test_acc: 98.28%
Epoch 338 - loss: 0.9141, acc: 99.06% / test_loss: 0.9215, test_acc: 98.32%
Epoch 339 - loss: 0.9133, acc: 99.16% / test_loss: 0.9245, test_acc: 98.03%
Epoch 340 - loss: 0.9144, acc: 99.05% / test_loss: 0.9240, test_acc: 98.04%
Epoch 341 - loss: 0.9150, acc: 98.99% / test_loss: 0.9251, test_acc: 97.93%
Epoch 342 - loss: 0.9144, acc: 99.04% / test_loss: 0.9223, test_acc: 98.23%
Epoch 343 - loss: 0.9136, acc: 99.12% / test_loss: 0.9223, test_acc: 98.26%
Epoch 344 - loss: 0.9151, acc: 98.99% / test_loss: 0.9237, test_acc: 98.13%
Epoch 345 - loss: 0.9137, acc: 99.11% / test_loss: 0.9213, test_acc: 98.32%
Epoch 346 - loss: 0.9129, acc: 99.19% / test_loss: 0.9207, test_acc: 98.37%
Epoch 347 - loss: 0.9147, acc: 99.02% / test_loss: 0.9220, test_acc: 98.28%
Epoch 348 - loss: 0.9134, acc: 99.14% / test_loss: 0.9216, test_acc: 98.32%
Epoch 349 - loss: 0.9137, acc: 99.12% / test_loss: 0.9217, test_acc: 98.32%
Epoch 350 - loss: 0.9142, acc: 99.06% / test_loss: 0.9262, test_acc: 97.86%
Epoch 351 - loss: 0.9167, acc: 98.81% / test_loss: 0.9231, test_acc: 98.18%
Epoch 352 - loss: 0.9142, acc: 99.06% / test_loss: 0.9212, test_acc: 98.36%
Epoch 353 - loss: 0.9137, acc: 99.11% / test_loss: 0.9231, test_acc: 98.16%
Epoch 354 - loss: 0.9142, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Epoch 355 - loss: 0.9140, acc: 99.09% / test_loss: 0.9238, test_acc: 98.10%
Epoch 356 - loss: 0.9142, acc: 99.06% / test_loss: 0.9250, test_acc: 97.93%
Epoch 357 - loss: 0.9147, acc: 99.01% / test_loss: 0.9234, test_acc: 98.13%
Epoch 358 - loss: 0.9135, acc: 99.12% / test_loss: 0.9224, test_acc: 98.23%
Epoch 359 - loss: 0.9143, acc: 99.06% / test_loss: 0.9251, test_acc: 97.96%
Epoch 360 - loss: 0.9142, acc: 99.06% / test_loss: 0.9224, test_acc: 98.22%
Epoch 361 - loss: 0.9147, acc: 99.00% / test_loss: 0.9232, test_acc: 98.18%
Epoch 362 - loss: 0.9148, acc: 99.00% / test_loss: 0.9219, test_acc: 98.27%
Epoch 363 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.24%
Epoch 364 - loss: 0.9132, acc: 99.16% / test_loss: 0.9218, test_acc: 98.29%
Epoch 365 - loss: 0.9153, acc: 98.94% / test_loss: 0.9250, test_acc: 97.98%
Epoch 366 - loss: 0.9150, acc: 98.99% / test_loss: 0.9231, test_acc: 98.14%
Epoch 367 - loss: 0.9137, acc: 99.11% / test_loss: 0.9211, test_acc: 98.38%
Epoch 368 - loss: 0.9133, acc: 99.15% / test_loss: 0.9209, test_acc: 98.39%
Epoch 369 - loss: 0.9145, acc: 99.03% / test_loss: 0.9217, test_acc: 98.31%
Epoch 370 - loss: 0.9141, acc: 99.08% / test_loss: 0.9229, test_acc: 98.20%
Epoch 371 - loss: 0.9143, acc: 99.06% / test_loss: 0.9236, test_acc: 98.14%
Epoch 372 - loss: 0.9175, acc: 98.74% / test_loss: 0.9325, test_acc: 97.18%
Epoch 373 - loss: 0.9159, acc: 98.88% / test_loss: 0.9204, test_acc: 98.44%
Epoch 374 - loss: 0.9135, acc: 99.13% / test_loss: 0.9213, test_acc: 98.32%
Epoch 375 - loss: 0.9147, acc: 99.01% / test_loss: 0.9211, test_acc: 98.38%
Epoch 376 - loss: 0.9148, acc: 98.98% / test_loss: 0.9220, test_acc: 98.25%
Epoch 377 - loss: 0.9144, acc: 99.03% / test_loss: 0.9236, test_acc: 98.11%
Epoch 378 - loss: 0.9139, acc: 99.09% / test_loss: 0.9219, test_acc: 98.28%
Epoch 379 - loss: 0.9143, acc: 99.06% / test_loss: 0.9216, test_acc: 98.31%
Epoch 380 - loss: 0.9147, acc: 99.00% / test_loss: 0.9235, test_acc: 98.10%
Epoch 381 - loss: 0.9141, acc: 99.07% / test_loss: 0.9210, test_acc: 98.35%
Epoch 382 - loss: 0.9140, acc: 99.09% / test_loss: 0.9252, test_acc: 97.95%
Epoch 383 - loss: 0.9143, acc: 99.05% / test_loss: 0.9216, test_acc: 98.31%
Epoch 384 - loss: 0.9136, acc: 99.12% / test_loss: 0.9207, test_acc: 98.41%
Epoch 385 - loss: 0.9142, acc: 99.07% / test_loss: 0.9261, test_acc: 97.88%
Epoch 386 - loss: 0.9154, acc: 98.93% / test_loss: 0.9227, test_acc: 98.22%
Epoch 387 - loss: 0.9133, acc: 99.15% / test_loss: 0.9216, test_acc: 98.32%
Epoch 388 - loss: 0.9128, acc: 99.20% / test_loss: 0.9209, test_acc: 98.39%
Epoch 389 - loss: 0.9125, acc: 99.23% / test_loss: 0.9209, test_acc: 98.38%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9260, test_acc: 97.89%
Epoch 391 - loss: 0.9156, acc: 98.92% / test_loss: 0.9239, test_acc: 98.07%
Epoch 392 - loss: 0.9139, acc: 99.11% / test_loss: 0.9230, test_acc: 98.16%
Epoch 393 - loss: 0.9142, acc: 99.06% / test_loss: 0.9209, test_acc: 98.40%
Epoch 394 - loss: 0.9129, acc: 99.20% / test_loss: 0.9212, test_acc: 98.38%
Epoch 395 - loss: 0.9133, acc: 99.15% / test_loss: 0.9206, test_acc: 98.41%
Epoch 396 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.15%
Epoch 397 - loss: 0.9148, acc: 99.00% / test_loss: 0.9228, test_acc: 98.21%
Epoch 398 - loss: 0.9149, acc: 98.99% / test_loss: 0.9263, test_acc: 97.81%
Epoch 399 - loss: 0.9147, acc: 99.00% / test_loss: 0.9226, test_acc: 98.23%
Epoch 400 - loss: 0.9143, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Best test accuracy 98.46% in epoch 306.
----------------------------------------------------------------------------------------------------
Run 2
Epoch 1 - loss: 1.3611, acc: 55.37% / test_loss: 1.2540, test_acc: 64.42%
Epoch 2 - loss: 1.1441, acc: 77.27% / test_loss: 1.0729, test_acc: 84.79%
Epoch 3 - loss: 1.0593, acc: 85.32% / test_loss: 1.0374, test_acc: 87.67%
Epoch 4 - loss: 1.0470, acc: 86.33% / test_loss: 1.0413, test_acc: 86.73%
Epoch 5 - loss: 1.0408, acc: 86.60% / test_loss: 1.0293, test_acc: 87.67%
Epoch 6 - loss: 1.0347, acc: 87.12% / test_loss: 1.0218, test_acc: 88.44%
Epoch 7 - loss: 1.0315, acc: 87.47% / test_loss: 1.0236, test_acc: 88.23%
Epoch 8 - loss: 1.0282, acc: 87.70% / test_loss: 1.0220, test_acc: 88.38%
Epoch 9 - loss: 1.0235, acc: 88.15% / test_loss: 1.0176, test_acc: 88.81%
Epoch 10 - loss: 1.0241, acc: 88.09% / test_loss: 1.0149, test_acc: 89.01%
Epoch 11 - loss: 1.0232, acc: 88.21% / test_loss: 1.0114, test_acc: 89.25%
Epoch 12 - loss: 1.0214, acc: 88.30% / test_loss: 1.0114, test_acc: 89.28%
Epoch 13 - loss: 1.0200, acc: 88.46% / test_loss: 1.0133, test_acc: 89.10%
Epoch 14 - loss: 1.0199, acc: 88.39% / test_loss: 1.0131, test_acc: 89.16%
Epoch 15 - loss: 1.0162, acc: 88.79% / test_loss: 1.0111, test_acc: 89.27%
Epoch 16 - loss: 1.0153, acc: 88.86% / test_loss: 1.0111, test_acc: 89.26%
Epoch 17 - loss: 1.0158, acc: 88.81% / test_loss: 1.0086, test_acc: 89.47%
Epoch 18 - loss: 1.0150, acc: 88.86% / test_loss: 1.0074, test_acc: 89.65%
Epoch 19 - loss: 1.0134, acc: 89.05% / test_loss: 1.0072, test_acc: 89.60%
Epoch 20 - loss: 1.0113, acc: 89.24% / test_loss: 1.0088, test_acc: 89.50%
Epoch 21 - loss: 1.0132, acc: 89.04% / test_loss: 1.0054, test_acc: 89.77%
Epoch 22 - loss: 1.0097, acc: 89.40% / test_loss: 1.0032, test_acc: 90.06%
Epoch 23 - loss: 1.0080, acc: 89.57% / test_loss: 1.0013, test_acc: 90.19%
Epoch 24 - loss: 1.0084, acc: 89.53% / test_loss: 1.0032, test_acc: 90.09%
Epoch 25 - loss: 1.0075, acc: 89.60% / test_loss: 1.0028, test_acc: 90.03%
Epoch 26 - loss: 1.0072, acc: 89.60% / test_loss: 1.0014, test_acc: 90.15%
Epoch 27 - loss: 1.0057, acc: 89.72% / test_loss: 1.0017, test_acc: 90.09%
Epoch 28 - loss: 1.0052, acc: 89.77% / test_loss: 1.0063, test_acc: 89.76%
Epoch 29 - loss: 1.0063, acc: 89.69% / test_loss: 0.9996, test_acc: 90.34%
Epoch 30 - loss: 1.0052, acc: 89.78% / test_loss: 1.0029, test_acc: 90.00%
Epoch 31 - loss: 1.0050, acc: 89.82% / test_loss: 1.0021, test_acc: 90.15%
Epoch 32 - loss: 1.0058, acc: 89.76% / test_loss: 0.9996, test_acc: 90.36%
Epoch 33 - loss: 1.0034, acc: 89.92% / test_loss: 1.0028, test_acc: 90.09%
Epoch 34 - loss: 1.0054, acc: 89.77% / test_loss: 1.0117, test_acc: 89.10%
Epoch 35 - loss: 1.0041, acc: 89.89% / test_loss: 0.9986, test_acc: 90.41%
Epoch 36 - loss: 1.0018, acc: 90.09% / test_loss: 0.9978, test_acc: 90.48%
Epoch 37 - loss: 1.0022, acc: 90.08% / test_loss: 0.9991, test_acc: 90.34%
Epoch 38 - loss: 1.0026, acc: 90.03% / test_loss: 1.0001, test_acc: 90.28%
Epoch 39 - loss: 1.0017, acc: 90.09% / test_loss: 0.9979, test_acc: 90.46%
Epoch 40 - loss: 1.0019, acc: 90.06% / test_loss: 0.9988, test_acc: 90.43%
Epoch 41 - loss: 1.0014, acc: 90.12% / test_loss: 0.9975, test_acc: 90.49%
Epoch 42 - loss: 1.0008, acc: 90.18% / test_loss: 0.9980, test_acc: 90.46%
Epoch 43 - loss: 1.0008, acc: 90.18% / test_loss: 0.9979, test_acc: 90.55%
Epoch 44 - loss: 1.0009, acc: 90.15% / test_loss: 0.9966, test_acc: 90.57%
Epoch 45 - loss: 0.9994, acc: 90.32% / test_loss: 0.9958, test_acc: 90.66%
Epoch 46 - loss: 0.9985, acc: 90.40% / test_loss: 0.9950, test_acc: 90.71%
Epoch 47 - loss: 0.9977, acc: 90.48% / test_loss: 0.9964, test_acc: 90.59%
Epoch 48 - loss: 0.9982, acc: 90.47% / test_loss: 0.9979, test_acc: 90.48%
Epoch 49 - loss: 0.9980, acc: 90.43% / test_loss: 0.9965, test_acc: 90.55%
Epoch 50 - loss: 0.9994, acc: 90.28% / test_loss: 0.9965, test_acc: 90.64%
Epoch 51 - loss: 0.9984, acc: 90.38% / test_loss: 0.9957, test_acc: 90.74%
Epoch 52 - loss: 0.9962, acc: 90.63% / test_loss: 0.9997, test_acc: 90.27%
Epoch 53 - loss: 0.9970, acc: 90.59% / test_loss: 0.9961, test_acc: 90.63%
Epoch 54 - loss: 0.9942, acc: 90.80% / test_loss: 0.9944, test_acc: 90.85%
Epoch 55 - loss: 0.9917, acc: 91.05% / test_loss: 1.0024, test_acc: 89.94%
Epoch 56 - loss: 0.9886, acc: 91.41% / test_loss: 0.9862, test_acc: 91.59%
Epoch 57 - loss: 0.9871, acc: 91.51% / test_loss: 0.9856, test_acc: 91.72%
Epoch 58 - loss: 0.9832, acc: 91.88% / test_loss: 0.9827, test_acc: 91.97%
Epoch 59 - loss: 0.9826, acc: 92.00% / test_loss: 0.9856, test_acc: 91.79%
Epoch 60 - loss: 0.9820, acc: 92.00% / test_loss: 0.9844, test_acc: 91.94%
Epoch 61 - loss: 0.9807, acc: 92.18% / test_loss: 0.9832, test_acc: 91.89%
Epoch 62 - loss: 0.9794, acc: 92.26% / test_loss: 0.9811, test_acc: 92.10%
Epoch 63 - loss: 0.9800, acc: 92.21% / test_loss: 0.9831, test_acc: 91.91%
Epoch 64 - loss: 0.9790, acc: 92.34% / test_loss: 0.9805, test_acc: 92.20%
Epoch 65 - loss: 0.9781, acc: 92.43% / test_loss: 0.9790, test_acc: 92.28%
Epoch 66 - loss: 0.9786, acc: 92.35% / test_loss: 0.9814, test_acc: 92.04%
Epoch 67 - loss: 0.9764, acc: 92.56% / test_loss: 0.9797, test_acc: 92.19%
Epoch 68 - loss: 0.9775, acc: 92.47% / test_loss: 0.9784, test_acc: 92.29%
Epoch 69 - loss: 0.9775, acc: 92.46% / test_loss: 0.9818, test_acc: 91.99%
Epoch 70 - loss: 0.9781, acc: 92.43% / test_loss: 0.9790, test_acc: 92.28%
Epoch 71 - loss: 0.9767, acc: 92.50% / test_loss: 0.9777, test_acc: 92.40%
Epoch 72 - loss: 0.9764, acc: 92.56% / test_loss: 0.9793, test_acc: 92.25%
Epoch 73 - loss: 0.9777, acc: 92.44% / test_loss: 0.9785, test_acc: 92.35%
Epoch 74 - loss: 0.9763, acc: 92.53% / test_loss: 0.9786, test_acc: 92.37%
Epoch 75 - loss: 0.9757, acc: 92.61% / test_loss: 0.9776, test_acc: 92.43%
Epoch 76 - loss: 0.9752, acc: 92.63% / test_loss: 0.9800, test_acc: 92.30%
Epoch 77 - loss: 0.9755, acc: 92.63% / test_loss: 0.9799, test_acc: 92.21%
Epoch 78 - loss: 0.9749, acc: 92.69% / test_loss: 0.9768, test_acc: 92.53%
Epoch 79 - loss: 0.9750, acc: 92.69% / test_loss: 0.9770, test_acc: 92.49%
Epoch 80 - loss: 0.9755, acc: 92.62% / test_loss: 0.9764, test_acc: 92.57%
Epoch 81 - loss: 0.9750, acc: 92.70% / test_loss: 0.9822, test_acc: 91.95%
Epoch 82 - loss: 0.9749, acc: 92.68% / test_loss: 0.9779, test_acc: 92.41%
Epoch 83 - loss: 0.9744, acc: 92.74% / test_loss: 0.9759, test_acc: 92.56%
Epoch 84 - loss: 0.9740, acc: 92.77% / test_loss: 0.9796, test_acc: 92.23%
Epoch 85 - loss: 0.9736, acc: 92.81% / test_loss: 0.9780, test_acc: 92.36%
Epoch 86 - loss: 0.9767, acc: 92.50% / test_loss: 0.9791, test_acc: 92.36%
Epoch 87 - loss: 0.9745, acc: 92.71% / test_loss: 0.9788, test_acc: 92.37%
Epoch 88 - loss: 0.9736, acc: 92.79% / test_loss: 0.9769, test_acc: 92.49%
Epoch 89 - loss: 0.9744, acc: 92.74% / test_loss: 0.9762, test_acc: 92.56%
Epoch 90 - loss: 0.9738, acc: 92.81% / test_loss: 0.9775, test_acc: 92.46%
Epoch 91 - loss: 0.9738, acc: 92.79% / test_loss: 0.9868, test_acc: 91.50%
Epoch 92 - loss: 0.9738, acc: 92.82% / test_loss: 0.9761, test_acc: 92.55%
Epoch 93 - loss: 0.9725, acc: 92.91% / test_loss: 0.9760, test_acc: 92.53%
Epoch 94 - loss: 0.9739, acc: 92.79% / test_loss: 0.9772, test_acc: 92.44%
Epoch 95 - loss: 0.9729, acc: 92.87% / test_loss: 0.9815, test_acc: 92.00%
Epoch 96 - loss: 0.9728, acc: 92.87% / test_loss: 0.9756, test_acc: 92.61%
Epoch 97 - loss: 0.9732, acc: 92.85% / test_loss: 0.9770, test_acc: 92.51%
Epoch 98 - loss: 0.9719, acc: 92.95% / test_loss: 0.9748, test_acc: 92.65%
Epoch 99 - loss: 0.9726, acc: 92.93% / test_loss: 0.9767, test_acc: 92.48%
Epoch 100 - loss: 0.9717, acc: 92.99% / test_loss: 0.9812, test_acc: 92.00%
Epoch 101 - loss: 0.9737, acc: 92.78% / test_loss: 0.9761, test_acc: 92.57%
Epoch 102 - loss: 0.9734, acc: 92.80% / test_loss: 0.9773, test_acc: 92.46%
Epoch 103 - loss: 0.9738, acc: 92.77% / test_loss: 0.9767, test_acc: 92.46%
Epoch 104 - loss: 0.9733, acc: 92.80% / test_loss: 0.9767, test_acc: 92.48%
Epoch 105 - loss: 0.9737, acc: 92.81% / test_loss: 0.9775, test_acc: 92.38%
Epoch 106 - loss: 0.9724, acc: 92.89% / test_loss: 0.9768, test_acc: 92.46%
Epoch 107 - loss: 0.9729, acc: 92.87% / test_loss: 0.9768, test_acc: 92.48%
Epoch 108 - loss: 0.9715, acc: 92.98% / test_loss: 0.9756, test_acc: 92.58%
Epoch 109 - loss: 0.9717, acc: 92.96% / test_loss: 0.9748, test_acc: 92.65%
Epoch 110 - loss: 0.9718, acc: 92.95% / test_loss: 0.9772, test_acc: 92.45%
Epoch 111 - loss: 0.9729, acc: 92.87% / test_loss: 0.9764, test_acc: 92.51%
Epoch 112 - loss: 0.9746, acc: 92.72% / test_loss: 0.9753, test_acc: 92.59%
Epoch 113 - loss: 0.9713, acc: 92.99% / test_loss: 0.9775, test_acc: 92.42%
Epoch 114 - loss: 0.9716, acc: 92.98% / test_loss: 0.9747, test_acc: 92.68%
Epoch 115 - loss: 0.9733, acc: 92.82% / test_loss: 0.9777, test_acc: 92.38%
Epoch 116 - loss: 0.9723, acc: 92.95% / test_loss: 0.9751, test_acc: 92.60%
Epoch 117 - loss: 0.9713, acc: 93.03% / test_loss: 0.9755, test_acc: 92.61%
Epoch 118 - loss: 0.9726, acc: 92.87% / test_loss: 0.9748, test_acc: 92.65%
Epoch 119 - loss: 0.9709, acc: 93.05% / test_loss: 0.9742, test_acc: 92.74%
Epoch 120 - loss: 0.9712, acc: 92.99% / test_loss: 0.9765, test_acc: 92.50%
Epoch 121 - loss: 0.9730, acc: 92.84% / test_loss: 0.9769, test_acc: 92.47%
Epoch 122 - loss: 0.9717, acc: 92.95% / test_loss: 0.9756, test_acc: 92.59%
Epoch 123 - loss: 0.9716, acc: 92.96% / test_loss: 0.9750, test_acc: 92.65%
Epoch 124 - loss: 0.9713, acc: 92.99% / test_loss: 0.9753, test_acc: 92.62%
Epoch 125 - loss: 0.9704, acc: 93.09% / test_loss: 0.9748, test_acc: 92.65%
Epoch 126 - loss: 0.9718, acc: 92.93% / test_loss: 0.9780, test_acc: 92.34%
Epoch 127 - loss: 0.9713, acc: 93.01% / test_loss: 0.9754, test_acc: 92.56%
Epoch 128 - loss: 0.9708, acc: 93.05% / test_loss: 0.9763, test_acc: 92.50%
Epoch 129 - loss: 0.9715, acc: 92.98% / test_loss: 0.9741, test_acc: 92.71%
Epoch 130 - loss: 0.9701, acc: 93.10% / test_loss: 0.9766, test_acc: 92.45%
Epoch 131 - loss: 0.9711, acc: 93.01% / test_loss: 0.9757, test_acc: 92.58%
Epoch 132 - loss: 0.9701, acc: 93.12% / test_loss: 0.9740, test_acc: 92.72%
Epoch 133 - loss: 0.9701, acc: 93.10% / test_loss: 0.9769, test_acc: 92.40%
Epoch 134 - loss: 0.9713, acc: 93.01% / test_loss: 0.9766, test_acc: 92.47%
Epoch 135 - loss: 0.9708, acc: 93.04% / test_loss: 0.9762, test_acc: 92.56%
Epoch 136 - loss: 0.9703, acc: 93.11% / test_loss: 0.9754, test_acc: 92.58%
Epoch 137 - loss: 0.9708, acc: 93.04% / test_loss: 0.9751, test_acc: 92.63%
Epoch 138 - loss: 0.9707, acc: 93.06% / test_loss: 0.9743, test_acc: 92.70%
Epoch 139 - loss: 0.9722, acc: 92.91% / test_loss: 0.9757, test_acc: 92.50%
Epoch 140 - loss: 0.9713, acc: 92.98% / test_loss: 0.9750, test_acc: 92.64%
Epoch 141 - loss: 0.9696, acc: 93.16% / test_loss: 0.9742, test_acc: 92.71%
Epoch 142 - loss: 0.9691, acc: 93.22% / test_loss: 0.9744, test_acc: 92.67%
Epoch 143 - loss: 0.9688, acc: 93.24% / test_loss: 0.9744, test_acc: 92.65%
Epoch 144 - loss: 0.9685, acc: 93.27% / test_loss: 0.9762, test_acc: 92.53%
Epoch 145 - loss: 0.9702, acc: 93.10% / test_loss: 0.9744, test_acc: 92.68%
Epoch 146 - loss: 0.9692, acc: 93.20% / test_loss: 0.9736, test_acc: 92.77%
Epoch 147 - loss: 0.9702, acc: 93.11% / test_loss: 0.9764, test_acc: 92.50%
Epoch 148 - loss: 0.9692, acc: 93.21% / test_loss: 0.9733, test_acc: 92.80%
Epoch 149 - loss: 0.9689, acc: 93.20% / test_loss: 0.9744, test_acc: 92.68%
Epoch 150 - loss: 0.9691, acc: 93.23% / test_loss: 0.9758, test_acc: 92.64%
Epoch 151 - loss: 0.9685, acc: 93.29% / test_loss: 0.9751, test_acc: 92.59%
Epoch 152 - loss: 0.9690, acc: 93.21% / test_loss: 0.9726, test_acc: 92.90%
Epoch 153 - loss: 0.9683, acc: 93.26% / test_loss: 0.9756, test_acc: 92.65%
Epoch 154 - loss: 0.9676, acc: 93.36% / test_loss: 0.9716, test_acc: 92.95%
Epoch 155 - loss: 0.9668, acc: 93.42% / test_loss: 0.9725, test_acc: 92.87%
Epoch 156 - loss: 0.9690, acc: 93.20% / test_loss: 0.9763, test_acc: 92.54%
Epoch 157 - loss: 0.9676, acc: 93.36% / test_loss: 0.9717, test_acc: 92.96%
Epoch 158 - loss: 0.9665, acc: 93.46% / test_loss: 0.9715, test_acc: 92.99%
Epoch 159 - loss: 0.9663, acc: 93.47% / test_loss: 0.9706, test_acc: 93.05%
Epoch 160 - loss: 0.9669, acc: 93.43% / test_loss: 0.9745, test_acc: 92.68%
Epoch 161 - loss: 0.9661, acc: 93.50% / test_loss: 0.9718, test_acc: 92.96%
Epoch 162 - loss: 0.9672, acc: 93.39% / test_loss: 0.9731, test_acc: 92.78%
Epoch 163 - loss: 0.9667, acc: 93.46% / test_loss: 0.9710, test_acc: 93.05%
Epoch 164 - loss: 0.9659, acc: 93.52% / test_loss: 0.9705, test_acc: 93.09%
Epoch 165 - loss: 0.9671, acc: 93.43% / test_loss: 0.9713, test_acc: 92.93%
Epoch 166 - loss: 0.9669, acc: 93.44% / test_loss: 0.9761, test_acc: 92.50%
Epoch 167 - loss: 0.9665, acc: 93.47% / test_loss: 0.9706, test_acc: 93.08%
Epoch 168 - loss: 0.9670, acc: 93.44% / test_loss: 0.9711, test_acc: 93.01%
Epoch 169 - loss: 0.9663, acc: 93.47% / test_loss: 0.9689, test_acc: 93.25%
Epoch 170 - loss: 0.9645, acc: 93.64% / test_loss: 0.9706, test_acc: 93.08%
Epoch 171 - loss: 0.9652, acc: 93.58% / test_loss: 0.9712, test_acc: 92.99%
Epoch 172 - loss: 0.9669, acc: 93.40% / test_loss: 0.9695, test_acc: 93.16%
Epoch 173 - loss: 0.9660, acc: 93.51% / test_loss: 0.9719, test_acc: 92.96%
Epoch 174 - loss: 0.9663, acc: 93.50% / test_loss: 0.9695, test_acc: 93.14%
Epoch 175 - loss: 0.9652, acc: 93.59% / test_loss: 0.9690, test_acc: 93.22%
Epoch 176 - loss: 0.9644, acc: 93.65% / test_loss: 0.9702, test_acc: 93.08%
Epoch 177 - loss: 0.9660, acc: 93.52% / test_loss: 0.9727, test_acc: 92.83%
Epoch 178 - loss: 0.9668, acc: 93.42% / test_loss: 0.9699, test_acc: 93.14%
Epoch 179 - loss: 0.9659, acc: 93.50% / test_loss: 0.9697, test_acc: 93.13%
Epoch 180 - loss: 0.9647, acc: 93.64% / test_loss: 0.9695, test_acc: 93.17%
Epoch 181 - loss: 0.9652, acc: 93.57% / test_loss: 0.9711, test_acc: 93.01%
Epoch 182 - loss: 0.9662, acc: 93.51% / test_loss: 0.9694, test_acc: 93.17%
Epoch 183 - loss: 0.9659, acc: 93.52% / test_loss: 0.9733, test_acc: 92.78%
Epoch 184 - loss: 0.9662, acc: 93.47% / test_loss: 0.9694, test_acc: 93.18%
Epoch 185 - loss: 0.9641, acc: 93.69% / test_loss: 0.9688, test_acc: 93.24%
Epoch 186 - loss: 0.9636, acc: 93.72% / test_loss: 0.9702, test_acc: 93.07%
Epoch 187 - loss: 0.9660, acc: 93.48% / test_loss: 0.9696, test_acc: 93.18%
Epoch 188 - loss: 0.9663, acc: 93.49% / test_loss: 0.9766, test_acc: 92.52%
Epoch 189 - loss: 0.9649, acc: 93.64% / test_loss: 0.9710, test_acc: 93.02%
Epoch 190 - loss: 0.9651, acc: 93.59% / test_loss: 0.9697, test_acc: 93.14%
Epoch 191 - loss: 0.9640, acc: 93.69% / test_loss: 0.9689, test_acc: 93.20%
Epoch 192 - loss: 0.9647, acc: 93.61% / test_loss: 0.9708, test_acc: 93.06%
Epoch 193 - loss: 0.9647, acc: 93.62% / test_loss: 0.9703, test_acc: 93.08%
Epoch 194 - loss: 0.9641, acc: 93.70% / test_loss: 0.9700, test_acc: 93.11%
Epoch 195 - loss: 0.9643, acc: 93.67% / test_loss: 0.9687, test_acc: 93.24%
Epoch 196 - loss: 0.9637, acc: 93.72% / test_loss: 0.9689, test_acc: 93.21%
Epoch 197 - loss: 0.9668, acc: 93.42% / test_loss: 0.9697, test_acc: 93.12%
Epoch 198 - loss: 0.9645, acc: 93.67% / test_loss: 0.9691, test_acc: 93.20%
Epoch 199 - loss: 0.9645, acc: 93.64% / test_loss: 0.9691, test_acc: 93.18%
Epoch 200 - loss: 0.9643, acc: 93.66% / test_loss: 0.9705, test_acc: 93.13%
Epoch 201 - loss: 0.9647, acc: 93.63% / test_loss: 0.9688, test_acc: 93.22%
Epoch 202 - loss: 0.9638, acc: 93.75% / test_loss: 0.9702, test_acc: 93.14%
Epoch 203 - loss: 0.9619, acc: 93.94% / test_loss: 0.9806, test_acc: 92.13%
Epoch 204 - loss: 0.9606, acc: 94.07% / test_loss: 0.9710, test_acc: 93.11%
Epoch 205 - loss: 0.9615, acc: 93.95% / test_loss: 0.9672, test_acc: 93.48%
Epoch 206 - loss: 0.9606, acc: 94.10% / test_loss: 0.9667, test_acc: 93.50%
Epoch 207 - loss: 0.9267, acc: 97.82% / test_loss: 0.9281, test_acc: 97.71%
Epoch 208 - loss: 0.9178, acc: 98.76% / test_loss: 0.9258, test_acc: 97.98%
Epoch 209 - loss: 0.9178, acc: 98.73% / test_loss: 0.9259, test_acc: 97.86%
Epoch 210 - loss: 0.9172, acc: 98.78% / test_loss: 0.9276, test_acc: 97.74%
Epoch 211 - loss: 0.9166, acc: 98.84% / test_loss: 0.9249, test_acc: 97.98%
Epoch 212 - loss: 0.9173, acc: 98.78% / test_loss: 0.9239, test_acc: 98.16%
Epoch 213 - loss: 0.9165, acc: 98.84% / test_loss: 0.9275, test_acc: 97.79%
Epoch 214 - loss: 0.9169, acc: 98.80% / test_loss: 0.9265, test_acc: 97.87%
Epoch 215 - loss: 0.9177, acc: 98.75% / test_loss: 0.9235, test_acc: 98.15%
Epoch 216 - loss: 0.9153, acc: 98.96% / test_loss: 0.9241, test_acc: 98.09%
Epoch 217 - loss: 0.9151, acc: 98.97% / test_loss: 0.9276, test_acc: 97.73%
Epoch 218 - loss: 0.9170, acc: 98.78% / test_loss: 0.9249, test_acc: 97.97%
Epoch 219 - loss: 0.9160, acc: 98.89% / test_loss: 0.9219, test_acc: 98.32%
Epoch 220 - loss: 0.9157, acc: 98.94% / test_loss: 0.9238, test_acc: 98.10%
Epoch 221 - loss: 0.9150, acc: 99.00% / test_loss: 0.9238, test_acc: 98.12%
Epoch 222 - loss: 0.9152, acc: 98.97% / test_loss: 0.9242, test_acc: 98.03%
Epoch 223 - loss: 0.9157, acc: 98.93% / test_loss: 0.9227, test_acc: 98.22%
Epoch 224 - loss: 0.9146, acc: 99.06% / test_loss: 0.9231, test_acc: 98.15%
Epoch 225 - loss: 0.9162, acc: 98.89% / test_loss: 0.9247, test_acc: 98.01%
Epoch 226 - loss: 0.9149, acc: 99.02% / test_loss: 0.9290, test_acc: 97.61%
Epoch 227 - loss: 0.9172, acc: 98.76% / test_loss: 0.9226, test_acc: 98.23%
Epoch 228 - loss: 0.9146, acc: 99.04% / test_loss: 0.9231, test_acc: 98.14%
Epoch 229 - loss: 0.9146, acc: 99.02% / test_loss: 0.9224, test_acc: 98.26%
Epoch 230 - loss: 0.9143, acc: 99.08% / test_loss: 0.9267, test_acc: 97.83%
Epoch 231 - loss: 0.9155, acc: 98.98% / test_loss: 0.9230, test_acc: 98.19%
Epoch 232 - loss: 0.9147, acc: 99.00% / test_loss: 0.9255, test_acc: 97.92%
Epoch 233 - loss: 0.9162, acc: 98.86% / test_loss: 0.9237, test_acc: 98.10%
Epoch 234 - loss: 0.9145, acc: 99.05% / test_loss: 0.9247, test_acc: 98.01%
Epoch 235 - loss: 0.9153, acc: 98.98% / test_loss: 0.9227, test_acc: 98.24%
Epoch 236 - loss: 0.9148, acc: 99.01% / test_loss: 0.9228, test_acc: 98.20%
Epoch 237 - loss: 0.9142, acc: 99.09% / test_loss: 0.9219, test_acc: 98.30%
Epoch 238 - loss: 0.9141, acc: 99.09% / test_loss: 0.9251, test_acc: 97.99%
Epoch 239 - loss: 0.9150, acc: 99.00% / test_loss: 0.9227, test_acc: 98.21%
Epoch 240 - loss: 0.9142, acc: 99.07% / test_loss: 0.9240, test_acc: 98.10%
Epoch 241 - loss: 0.9153, acc: 98.97% / test_loss: 0.9244, test_acc: 98.08%
Epoch 242 - loss: 0.9150, acc: 99.01% / test_loss: 0.9263, test_acc: 97.84%
Epoch 243 - loss: 0.9140, acc: 99.10% / test_loss: 0.9266, test_acc: 97.82%
Epoch 244 - loss: 0.9140, acc: 99.09% / test_loss: 0.9228, test_acc: 98.18%
Epoch 245 - loss: 0.9157, acc: 98.92% / test_loss: 0.9246, test_acc: 98.01%
Epoch 246 - loss: 0.9162, acc: 98.85% / test_loss: 0.9229, test_acc: 98.21%
Epoch 247 - loss: 0.9143, acc: 99.04% / test_loss: 0.9243, test_acc: 98.08%
Epoch 248 - loss: 0.9141, acc: 99.08% / test_loss: 0.9224, test_acc: 98.26%
Epoch 249 - loss: 0.9138, acc: 99.11% / test_loss: 0.9227, test_acc: 98.23%
Epoch 250 - loss: 0.9151, acc: 98.97% / test_loss: 0.9213, test_acc: 98.38%
Epoch 251 - loss: 0.9140, acc: 99.12% / test_loss: 0.9230, test_acc: 98.21%
Epoch 252 - loss: 0.9139, acc: 99.10% / test_loss: 0.9260, test_acc: 97.88%
Epoch 253 - loss: 0.9161, acc: 98.84% / test_loss: 0.9253, test_acc: 97.99%
Epoch 254 - loss: 0.9144, acc: 99.05% / test_loss: 0.9220, test_acc: 98.28%
Epoch 255 - loss: 0.9136, acc: 99.13% / test_loss: 0.9222, test_acc: 98.26%
Epoch 256 - loss: 0.9147, acc: 99.03% / test_loss: 0.9221, test_acc: 98.27%
Epoch 257 - loss: 0.9136, acc: 99.12% / test_loss: 0.9233, test_acc: 98.15%
Epoch 258 - loss: 0.9138, acc: 99.11% / test_loss: 0.9222, test_acc: 98.28%
Epoch 259 - loss: 0.9134, acc: 99.14% / test_loss: 0.9225, test_acc: 98.22%
Epoch 260 - loss: 0.9135, acc: 99.13% / test_loss: 0.9218, test_acc: 98.33%
Epoch 261 - loss: 0.9145, acc: 99.05% / test_loss: 0.9230, test_acc: 98.16%
Epoch 262 - loss: 0.9134, acc: 99.15% / test_loss: 0.9229, test_acc: 98.21%
Epoch 263 - loss: 0.9146, acc: 99.03% / test_loss: 0.9267, test_acc: 97.81%
Epoch 264 - loss: 0.9150, acc: 98.99% / test_loss: 0.9225, test_acc: 98.24%
Epoch 265 - loss: 0.9142, acc: 99.06% / test_loss: 0.9244, test_acc: 98.02%
Epoch 266 - loss: 0.9153, acc: 98.96% / test_loss: 0.9229, test_acc: 98.20%
Epoch 267 - loss: 0.9141, acc: 99.08% / test_loss: 0.9241, test_acc: 98.07%
Epoch 268 - loss: 0.9136, acc: 99.14% / test_loss: 0.9234, test_acc: 98.14%
Epoch 269 - loss: 0.9145, acc: 99.04% / test_loss: 0.9229, test_acc: 98.20%
Epoch 270 - loss: 0.9144, acc: 99.03% / test_loss: 0.9234, test_acc: 98.10%
Epoch 271 - loss: 0.9151, acc: 98.98% / test_loss: 0.9228, test_acc: 98.23%
Epoch 272 - loss: 0.9144, acc: 99.05% / test_loss: 0.9226, test_acc: 98.24%
Epoch 273 - loss: 0.9145, acc: 99.03% / test_loss: 0.9228, test_acc: 98.20%
Epoch 274 - loss: 0.9134, acc: 99.15% / test_loss: 0.9215, test_acc: 98.34%
Epoch 275 - loss: 0.9127, acc: 99.22% / test_loss: 0.9237, test_acc: 98.09%
Epoch 276 - loss: 0.9141, acc: 99.09% / test_loss: 0.9222, test_acc: 98.29%
Epoch 277 - loss: 0.9125, acc: 99.24% / test_loss: 0.9224, test_acc: 98.23%
Epoch 278 - loss: 0.9145, acc: 99.03% / test_loss: 0.9233, test_acc: 98.14%
Epoch 279 - loss: 0.9163, acc: 98.84% / test_loss: 0.9229, test_acc: 98.20%
Epoch 280 - loss: 0.9141, acc: 99.10% / test_loss: 0.9236, test_acc: 98.11%
Epoch 281 - loss: 0.9138, acc: 99.11% / test_loss: 0.9225, test_acc: 98.22%
Epoch 282 - loss: 0.9138, acc: 99.12% / test_loss: 0.9218, test_acc: 98.29%
Epoch 283 - loss: 0.9133, acc: 99.17% / test_loss: 0.9242, test_acc: 98.07%
Epoch 284 - loss: 0.9143, acc: 99.04% / test_loss: 0.9230, test_acc: 98.18%
Epoch 285 - loss: 0.9144, acc: 99.04% / test_loss: 0.9221, test_acc: 98.27%
Epoch 286 - loss: 0.9137, acc: 99.12% / test_loss: 0.9292, test_acc: 97.60%
Epoch 287 - loss: 0.9138, acc: 99.11% / test_loss: 0.9249, test_acc: 98.04%
Epoch 288 - loss: 0.9138, acc: 99.11% / test_loss: 0.9268, test_acc: 97.82%
Epoch 289 - loss: 0.9132, acc: 99.18% / test_loss: 0.9233, test_acc: 98.13%
Epoch 290 - loss: 0.9135, acc: 99.12% / test_loss: 0.9228, test_acc: 98.21%
Epoch 291 - loss: 0.9129, acc: 99.20% / test_loss: 0.9226, test_acc: 98.25%
Epoch 292 - loss: 0.9135, acc: 99.15% / test_loss: 0.9240, test_acc: 98.05%
Epoch 293 - loss: 0.9128, acc: 99.23% / test_loss: 0.9233, test_acc: 98.14%
Epoch 294 - loss: 0.9162, acc: 98.88% / test_loss: 0.9252, test_acc: 97.95%
Epoch 295 - loss: 0.9149, acc: 99.00% / test_loss: 0.9230, test_acc: 98.20%
Epoch 296 - loss: 0.9141, acc: 99.07% / test_loss: 0.9225, test_acc: 98.20%
Epoch 297 - loss: 0.9133, acc: 99.15% / test_loss: 0.9230, test_acc: 98.21%
Epoch 298 - loss: 0.9128, acc: 99.20% / test_loss: 0.9223, test_acc: 98.23%
Epoch 299 - loss: 0.9124, acc: 99.25% / test_loss: 0.9227, test_acc: 98.23%
Epoch 300 - loss: 0.9126, acc: 99.22% / test_loss: 0.9219, test_acc: 98.28%
Epoch 301 - loss: 0.9126, acc: 99.23% / test_loss: 0.9221, test_acc: 98.28%
Epoch 302 - loss: 0.9146, acc: 99.00% / test_loss: 0.9222, test_acc: 98.25%
Epoch 303 - loss: 0.9142, acc: 99.07% / test_loss: 0.9218, test_acc: 98.30%
Epoch 304 - loss: 0.9141, acc: 99.08% / test_loss: 0.9213, test_acc: 98.35%
Epoch 305 - loss: 0.9134, acc: 99.16% / test_loss: 0.9214, test_acc: 98.33%
Epoch 306 - loss: 0.9130, acc: 99.18% / test_loss: 0.9225, test_acc: 98.23%
Epoch 307 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.22%
Epoch 308 - loss: 0.9136, acc: 99.13% / test_loss: 0.9229, test_acc: 98.18%
Epoch 309 - loss: 0.9140, acc: 99.06% / test_loss: 0.9223, test_acc: 98.27%
Epoch 310 - loss: 0.9131, acc: 99.17% / test_loss: 0.9278, test_acc: 97.67%
Epoch 311 - loss: 0.9135, acc: 99.12% / test_loss: 0.9224, test_acc: 98.26%
Epoch 312 - loss: 0.9148, acc: 99.02% / test_loss: 0.9233, test_acc: 98.12%
Epoch 313 - loss: 0.9143, acc: 99.06% / test_loss: 0.9230, test_acc: 98.15%
Epoch 314 - loss: 0.9131, acc: 99.20% / test_loss: 0.9239, test_acc: 98.08%
Epoch 315 - loss: 0.9142, acc: 99.06% / test_loss: 0.9245, test_acc: 98.02%
Epoch 316 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.27%
Epoch 317 - loss: 0.9135, acc: 99.12% / test_loss: 0.9220, test_acc: 98.27%
Epoch 318 - loss: 0.9140, acc: 99.10% / test_loss: 0.9229, test_acc: 98.20%
Epoch 319 - loss: 0.9139, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 320 - loss: 0.9132, acc: 99.17% / test_loss: 0.9222, test_acc: 98.26%
Epoch 321 - loss: 0.9139, acc: 99.09% / test_loss: 0.9227, test_acc: 98.18%
Epoch 322 - loss: 0.9134, acc: 99.13% / test_loss: 0.9219, test_acc: 98.29%
Epoch 323 - loss: 0.9141, acc: 99.06% / test_loss: 0.9240, test_acc: 98.07%
Epoch 324 - loss: 0.9138, acc: 99.08% / test_loss: 0.9219, test_acc: 98.30%
Epoch 325 - loss: 0.9141, acc: 99.06% / test_loss: 0.9224, test_acc: 98.26%
Epoch 326 - loss: 0.9127, acc: 99.21% / test_loss: 0.9219, test_acc: 98.29%
Epoch 327 - loss: 0.9127, acc: 99.22% / test_loss: 0.9233, test_acc: 98.14%
Epoch 328 - loss: 0.9131, acc: 99.18% / test_loss: 0.9212, test_acc: 98.36%
Epoch 329 - loss: 0.9130, acc: 99.18% / test_loss: 0.9225, test_acc: 98.21%
Epoch 330 - loss: 0.9123, acc: 99.26% / test_loss: 0.9216, test_acc: 98.32%
Epoch 331 - loss: 0.9128, acc: 99.21% / test_loss: 0.9238, test_acc: 98.09%
Epoch 332 - loss: 0.9131, acc: 99.18% / test_loss: 0.9217, test_acc: 98.32%
Epoch 333 - loss: 0.9146, acc: 99.03% / test_loss: 0.9226, test_acc: 98.22%
Epoch 334 - loss: 0.9153, acc: 98.95% / test_loss: 0.9242, test_acc: 98.04%
Epoch 335 - loss: 0.9140, acc: 99.09% / test_loss: 0.9248, test_acc: 97.98%
Epoch 336 - loss: 0.9139, acc: 99.08% / test_loss: 0.9218, test_acc: 98.29%
Epoch 337 - loss: 0.9123, acc: 99.24% / test_loss: 0.9208, test_acc: 98.39%
Epoch 338 - loss: 0.9130, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 339 - loss: 0.9122, acc: 99.27% / test_loss: 0.9259, test_acc: 97.90%
Epoch 340 - loss: 0.9124, acc: 99.25% / test_loss: 0.9221, test_acc: 98.24%
Epoch 341 - loss: 0.9124, acc: 99.25% / test_loss: 0.9231, test_acc: 98.17%
Epoch 342 - loss: 0.9131, acc: 99.17% / test_loss: 0.9220, test_acc: 98.26%
Epoch 343 - loss: 0.9133, acc: 99.16% / test_loss: 0.9229, test_acc: 98.20%
Epoch 344 - loss: 0.9128, acc: 99.21% / test_loss: 0.9218, test_acc: 98.29%
Epoch 345 - loss: 0.9135, acc: 99.14% / test_loss: 0.9249, test_acc: 98.02%
Epoch 346 - loss: 0.9133, acc: 99.17% / test_loss: 0.9226, test_acc: 98.23%
Epoch 347 - loss: 0.9128, acc: 99.22% / test_loss: 0.9219, test_acc: 98.28%
Epoch 348 - loss: 0.9122, acc: 99.26% / test_loss: 0.9227, test_acc: 98.21%
Epoch 349 - loss: 0.9138, acc: 99.10% / test_loss: 0.9215, test_acc: 98.32%
Epoch 350 - loss: 0.9127, acc: 99.21% / test_loss: 0.9235, test_acc: 98.13%
Epoch 351 - loss: 0.9143, acc: 99.05% / test_loss: 0.9222, test_acc: 98.24%
Epoch 352 - loss: 0.9141, acc: 99.07% / test_loss: 0.9222, test_acc: 98.25%
Epoch 353 - loss: 0.9126, acc: 99.22% / test_loss: 0.9202, test_acc: 98.47%
Epoch 354 - loss: 0.9125, acc: 99.24% / test_loss: 0.9204, test_acc: 98.44%
Epoch 355 - loss: 0.9122, acc: 99.27% / test_loss: 0.9208, test_acc: 98.41%
Epoch 356 - loss: 0.9133, acc: 99.16% / test_loss: 0.9231, test_acc: 98.17%
Epoch 357 - loss: 0.9146, acc: 99.02% / test_loss: 0.9233, test_acc: 98.14%
Epoch 358 - loss: 0.9133, acc: 99.16% / test_loss: 0.9206, test_acc: 98.42%
Epoch 359 - loss: 0.9123, acc: 99.24% / test_loss: 0.9228, test_acc: 98.22%
Epoch 360 - loss: 0.9128, acc: 99.20% / test_loss: 0.9215, test_acc: 98.35%
Epoch 361 - loss: 0.9124, acc: 99.24% / test_loss: 0.9210, test_acc: 98.38%
Epoch 362 - loss: 0.9124, acc: 99.24% / test_loss: 0.9212, test_acc: 98.36%
Epoch 363 - loss: 0.9133, acc: 99.15% / test_loss: 0.9229, test_acc: 98.18%
Epoch 364 - loss: 0.9139, acc: 99.09% / test_loss: 0.9213, test_acc: 98.36%
Epoch 365 - loss: 0.9132, acc: 99.15% / test_loss: 0.9214, test_acc: 98.35%
Epoch 366 - loss: 0.9128, acc: 99.19% / test_loss: 0.9229, test_acc: 98.19%
Epoch 367 - loss: 0.9122, acc: 99.27% / test_loss: 0.9216, test_acc: 98.32%
Epoch 368 - loss: 0.9142, acc: 99.06% / test_loss: 0.9222, test_acc: 98.24%
Epoch 369 - loss: 0.9133, acc: 99.14% / test_loss: 0.9222, test_acc: 98.24%
Epoch 370 - loss: 0.9126, acc: 99.22% / test_loss: 0.9215, test_acc: 98.31%
Epoch 371 - loss: 0.9122, acc: 99.27% / test_loss: 0.9212, test_acc: 98.37%
Epoch 372 - loss: 0.9132, acc: 99.14% / test_loss: 0.9226, test_acc: 98.20%
Epoch 373 - loss: 0.9125, acc: 99.22% / test_loss: 0.9216, test_acc: 98.29%
Epoch 374 - loss: 0.9131, acc: 99.18% / test_loss: 0.9231, test_acc: 98.17%
Epoch 375 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.19%
Epoch 376 - loss: 0.9142, acc: 99.07% / test_loss: 0.9214, test_acc: 98.32%
Epoch 377 - loss: 0.9120, acc: 99.29% / test_loss: 0.9215, test_acc: 98.33%
Epoch 378 - loss: 0.9119, acc: 99.30% / test_loss: 0.9213, test_acc: 98.35%
Epoch 379 - loss: 0.9120, acc: 99.29% / test_loss: 0.9252, test_acc: 97.95%
Epoch 380 - loss: 0.9139, acc: 99.10% / test_loss: 0.9264, test_acc: 97.88%
Epoch 381 - loss: 0.9144, acc: 99.06% / test_loss: 0.9271, test_acc: 97.73%
Epoch 382 - loss: 0.9142, acc: 99.09% / test_loss: 0.9222, test_acc: 98.23%
Epoch 383 - loss: 0.9125, acc: 99.23% / test_loss: 0.9243, test_acc: 98.04%
Epoch 384 - loss: 0.9137, acc: 99.12% / test_loss: 0.9224, test_acc: 98.26%
Epoch 385 - loss: 0.9127, acc: 99.21% / test_loss: 0.9214, test_acc: 98.33%
Epoch 386 - loss: 0.9121, acc: 99.28% / test_loss: 0.9210, test_acc: 98.39%
Epoch 387 - loss: 0.9133, acc: 99.15% / test_loss: 0.9221, test_acc: 98.25%
Epoch 388 - loss: 0.9125, acc: 99.24% / test_loss: 0.9236, test_acc: 98.12%
Epoch 389 - loss: 0.9125, acc: 99.23% / test_loss: 0.9217, test_acc: 98.32%
Epoch 390 - loss: 0.9126, acc: 99.23% / test_loss: 0.9231, test_acc: 98.17%
Epoch 391 - loss: 0.9148, acc: 99.00% / test_loss: 0.9216, test_acc: 98.31%
Epoch 392 - loss: 0.9143, acc: 99.06% / test_loss: 0.9220, test_acc: 98.26%
Epoch 393 - loss: 0.9126, acc: 99.24% / test_loss: 0.9249, test_acc: 97.99%
Epoch 394 - loss: 0.9138, acc: 99.09% / test_loss: 0.9238, test_acc: 98.10%
Epoch 395 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.34%
Epoch 396 - loss: 0.9128, acc: 99.20% / test_loss: 0.9220, test_acc: 98.26%
Epoch 397 - loss: 0.9124, acc: 99.24% / test_loss: 0.9233, test_acc: 98.14%
Epoch 398 - loss: 0.9125, acc: 99.24% / test_loss: 0.9214, test_acc: 98.35%
Epoch 399 - loss: 0.9120, acc: 99.28% / test_loss: 0.9236, test_acc: 98.13%
Epoch 400 - loss: 0.9145, acc: 99.02% / test_loss: 0.9262, test_acc: 97.85%
Best test accuracy 98.47% in epoch 353.
----------------------------------------------------------------------------------------------------
Run 3
Epoch 1 - loss: 1.3517, acc: 56.00% / test_loss: 1.1935, test_acc: 71.15%
Epoch 2 - loss: 1.1647, acc: 74.74% / test_loss: 1.1107, test_acc: 79.91%
Epoch 3 - loss: 1.1051, acc: 80.39% / test_loss: 1.0854, test_acc: 82.13%
Epoch 4 - loss: 1.0872, acc: 81.94% / test_loss: 1.0485, test_acc: 86.53%
Epoch 5 - loss: 1.0449, acc: 86.36% / test_loss: 1.0433, test_acc: 86.27%
Epoch 6 - loss: 1.0351, acc: 87.11% / test_loss: 1.0256, test_acc: 88.18%
Epoch 7 - loss: 1.0303, acc: 87.51% / test_loss: 1.0206, test_acc: 88.44%
Epoch 8 - loss: 1.0269, acc: 87.78% / test_loss: 1.0137, test_acc: 89.04%
Epoch 9 - loss: 1.0237, acc: 88.12% / test_loss: 1.0146, test_acc: 89.13%
Epoch 10 - loss: 1.0217, acc: 88.25% / test_loss: 1.0112, test_acc: 89.19%
Epoch 11 - loss: 1.0194, acc: 88.44% / test_loss: 1.0104, test_acc: 89.28%
Epoch 12 - loss: 1.0194, acc: 88.43% / test_loss: 1.0121, test_acc: 89.14%
Epoch 13 - loss: 1.0182, acc: 88.49% / test_loss: 1.0102, test_acc: 89.25%
Epoch 14 - loss: 1.0161, acc: 88.77% / test_loss: 1.0173, test_acc: 88.81%
Epoch 15 - loss: 1.0155, acc: 88.83% / test_loss: 1.0093, test_acc: 89.42%
Epoch 16 - loss: 1.0140, acc: 88.91% / test_loss: 1.0057, test_acc: 89.73%
Epoch 17 - loss: 1.0125, acc: 89.08% / test_loss: 1.0064, test_acc: 89.63%
Epoch 18 - loss: 1.0137, acc: 88.98% / test_loss: 1.0067, test_acc: 89.68%
Epoch 19 - loss: 1.0126, acc: 89.05% / test_loss: 1.0049, test_acc: 89.73%
Epoch 20 - loss: 1.0106, acc: 89.21% / test_loss: 1.0070, test_acc: 89.56%
Epoch 21 - loss: 1.0090, acc: 89.40% / test_loss: 1.0059, test_acc: 89.77%
Epoch 22 - loss: 1.0083, acc: 89.48% / test_loss: 1.0043, test_acc: 89.90%
Epoch 23 - loss: 1.0061, acc: 89.67% / test_loss: 1.0050, test_acc: 89.84%
Epoch 24 - loss: 1.0062, acc: 89.69% / test_loss: 1.0048, test_acc: 90.07%
Epoch 25 - loss: 1.0056, acc: 89.78% / test_loss: 1.0065, test_acc: 89.82%
Epoch 26 - loss: 1.0051, acc: 89.85% / test_loss: 1.0011, test_acc: 90.17%
Epoch 27 - loss: 1.0031, acc: 90.03% / test_loss: 1.0000, test_acc: 90.37%
Epoch 28 - loss: 1.0030, acc: 90.03% / test_loss: 1.0003, test_acc: 90.24%
Epoch 29 - loss: 1.0034, acc: 89.96% / test_loss: 0.9998, test_acc: 90.31%
Epoch 30 - loss: 1.0021, acc: 90.03% / test_loss: 1.0021, test_acc: 90.15%
Epoch 31 - loss: 1.0010, acc: 90.20% / test_loss: 1.0003, test_acc: 90.31%
Epoch 32 - loss: 1.0012, acc: 90.13% / test_loss: 0.9984, test_acc: 90.46%
Epoch 33 - loss: 0.9987, acc: 90.43% / test_loss: 0.9978, test_acc: 90.44%
Epoch 34 - loss: 0.9994, acc: 90.35% / test_loss: 0.9986, test_acc: 90.34%
Epoch 35 - loss: 0.9996, acc: 90.28% / test_loss: 1.0004, test_acc: 90.23%
Epoch 36 - loss: 0.9989, acc: 90.45% / test_loss: 0.9948, test_acc: 90.73%
Epoch 37 - loss: 0.9942, acc: 90.86% / test_loss: 0.9982, test_acc: 90.51%
Epoch 38 - loss: 0.9903, acc: 91.29% / test_loss: 0.9892, test_acc: 91.32%
Epoch 39 - loss: 0.9892, acc: 91.38% / test_loss: 0.9887, test_acc: 91.40%
Epoch 40 - loss: 0.9868, acc: 91.59% / test_loss: 0.9838, test_acc: 91.89%
Epoch 41 - loss: 0.9822, acc: 92.07% / test_loss: 0.9821, test_acc: 91.99%
Epoch 42 - loss: 0.9797, acc: 92.26% / test_loss: 0.9812, test_acc: 92.07%
Epoch 43 - loss: 0.9804, acc: 92.21% / test_loss: 0.9818, test_acc: 92.01%
Epoch 44 - loss: 0.9784, acc: 92.36% / test_loss: 0.9846, test_acc: 91.71%
Epoch 45 - loss: 0.9777, acc: 92.48% / test_loss: 0.9810, test_acc: 92.13%
Epoch 46 - loss: 0.9795, acc: 92.36% / test_loss: 0.9839, test_acc: 91.88%
Epoch 47 - loss: 0.9772, acc: 92.50% / test_loss: 0.9789, test_acc: 92.27%
Epoch 48 - loss: 0.9769, acc: 92.51% / test_loss: 0.9795, test_acc: 92.28%
Epoch 49 - loss: 0.9760, acc: 92.70% / test_loss: 0.9790, test_acc: 92.32%
Epoch 50 - loss: 0.9753, acc: 92.71% / test_loss: 0.9781, test_acc: 92.42%
Epoch 51 - loss: 0.9764, acc: 92.56% / test_loss: 0.9812, test_acc: 92.22%
Epoch 52 - loss: 0.9750, acc: 92.75% / test_loss: 0.9802, test_acc: 92.22%
Epoch 53 - loss: 0.9750, acc: 92.71% / test_loss: 0.9785, test_acc: 92.34%
Epoch 54 - loss: 0.9759, acc: 92.65% / test_loss: 0.9760, test_acc: 92.61%
Epoch 55 - loss: 0.9731, acc: 92.92% / test_loss: 0.9763, test_acc: 92.58%
Epoch 56 - loss: 0.9748, acc: 92.75% / test_loss: 0.9784, test_acc: 92.37%
Epoch 57 - loss: 0.9729, acc: 92.94% / test_loss: 0.9771, test_acc: 92.50%
Epoch 58 - loss: 0.9728, acc: 92.93% / test_loss: 0.9756, test_acc: 92.61%
Epoch 59 - loss: 0.9719, acc: 93.02% / test_loss: 0.9750, test_acc: 92.72%
Epoch 60 - loss: 0.9716, acc: 93.05% / test_loss: 0.9767, test_acc: 92.65%
Epoch 61 - loss: 0.9719, acc: 93.02% / test_loss: 0.9760, test_acc: 92.62%
Epoch 62 - loss: 0.9723, acc: 92.95% / test_loss: 0.9747, test_acc: 92.74%
Epoch 63 - loss: 0.9722, acc: 92.96% / test_loss: 0.9799, test_acc: 92.34%
Epoch 64 - loss: 0.9718, acc: 93.03% / test_loss: 0.9749, test_acc: 92.70%
Epoch 65 - loss: 0.9721, acc: 93.02% / test_loss: 0.9754, test_acc: 92.68%
Epoch 66 - loss: 0.9704, acc: 93.15% / test_loss: 0.9755, test_acc: 92.60%
Epoch 67 - loss: 0.9716, acc: 93.02% / test_loss: 0.9751, test_acc: 92.69%
Epoch 68 - loss: 0.9700, acc: 93.16% / test_loss: 0.9744, test_acc: 92.72%
Epoch 69 - loss: 0.9705, acc: 93.11% / test_loss: 0.9754, test_acc: 92.64%
Epoch 70 - loss: 0.9714, acc: 93.06% / test_loss: 0.9725, test_acc: 92.89%
Epoch 71 - loss: 0.9703, acc: 93.13% / test_loss: 0.9732, test_acc: 92.86%
Epoch 72 - loss: 0.9696, acc: 93.23% / test_loss: 0.9817, test_acc: 92.01%
Epoch 73 - loss: 0.9707, acc: 93.11% / test_loss: 0.9734, test_acc: 92.80%
Epoch 74 - loss: 0.9705, acc: 93.14% / test_loss: 0.9759, test_acc: 92.63%
Epoch 75 - loss: 0.9710, acc: 93.10% / test_loss: 0.9734, test_acc: 92.81%
Epoch 76 - loss: 0.9688, acc: 93.28% / test_loss: 0.9723, test_acc: 92.96%
Epoch 77 - loss: 0.9703, acc: 93.12% / test_loss: 0.9750, test_acc: 92.62%
Epoch 78 - loss: 0.9473, acc: 95.63% / test_loss: 0.9387, test_acc: 96.71%
Epoch 79 - loss: 0.9275, acc: 97.82% / test_loss: 0.9339, test_acc: 97.11%
Epoch 80 - loss: 0.9256, acc: 97.96% / test_loss: 0.9311, test_acc: 97.41%
Epoch 81 - loss: 0.9263, acc: 97.89% / test_loss: 0.9312, test_acc: 97.38%
Epoch 82 - loss: 0.9270, acc: 97.82% / test_loss: 0.9313, test_acc: 97.35%
Epoch 83 - loss: 0.9276, acc: 97.78% / test_loss: 0.9316, test_acc: 97.30%
Epoch 84 - loss: 0.9250, acc: 98.01% / test_loss: 0.9337, test_acc: 97.05%
Epoch 85 - loss: 0.9232, acc: 98.18% / test_loss: 0.9301, test_acc: 97.46%
Epoch 86 - loss: 0.9246, acc: 98.04% / test_loss: 0.9308, test_acc: 97.41%
Epoch 87 - loss: 0.9249, acc: 98.02% / test_loss: 0.9339, test_acc: 97.09%
Epoch 88 - loss: 0.9264, acc: 97.87% / test_loss: 0.9301, test_acc: 97.46%
Epoch 89 - loss: 0.9254, acc: 97.97% / test_loss: 0.9323, test_acc: 97.27%
Epoch 90 - loss: 0.9244, acc: 98.07% / test_loss: 0.9327, test_acc: 97.19%
Epoch 91 - loss: 0.9251, acc: 98.00% / test_loss: 0.9290, test_acc: 97.58%
Epoch 92 - loss: 0.9239, acc: 98.10% / test_loss: 0.9293, test_acc: 97.52%
Epoch 93 - loss: 0.9240, acc: 98.14% / test_loss: 0.9315, test_acc: 97.33%
Epoch 94 - loss: 0.9243, acc: 98.04% / test_loss: 0.9331, test_acc: 97.18%
Epoch 95 - loss: 0.9249, acc: 98.01% / test_loss: 0.9292, test_acc: 97.54%
Epoch 96 - loss: 0.9255, acc: 97.96% / test_loss: 0.9322, test_acc: 97.24%
Epoch 97 - loss: 0.9249, acc: 97.98% / test_loss: 0.9296, test_acc: 97.55%
Epoch 98 - loss: 0.9213, acc: 98.36% / test_loss: 0.9275, test_acc: 97.74%
Epoch 99 - loss: 0.9206, acc: 98.45% / test_loss: 0.9298, test_acc: 97.51%
Epoch 100 - loss: 0.9207, acc: 98.45% / test_loss: 0.9284, test_acc: 97.59%
Epoch 101 - loss: 0.9189, acc: 98.64% / test_loss: 0.9270, test_acc: 97.83%
Epoch 102 - loss: 0.9234, acc: 98.10% / test_loss: 0.9309, test_acc: 97.37%
Epoch 103 - loss: 0.9195, acc: 98.54% / test_loss: 0.9269, test_acc: 97.79%
Epoch 104 - loss: 0.9208, acc: 98.43% / test_loss: 0.9264, test_acc: 97.82%
Epoch 105 - loss: 0.9216, acc: 98.33% / test_loss: 0.9275, test_acc: 97.73%
Epoch 106 - loss: 0.9203, acc: 98.44% / test_loss: 0.9273, test_acc: 97.80%
Epoch 107 - loss: 0.9190, acc: 98.60% / test_loss: 0.9251, test_acc: 97.95%
Epoch 108 - loss: 0.9194, acc: 98.57% / test_loss: 0.9262, test_acc: 97.88%
Epoch 109 - loss: 0.9197, acc: 98.51% / test_loss: 0.9253, test_acc: 97.98%
Epoch 110 - loss: 0.9201, acc: 98.51% / test_loss: 0.9321, test_acc: 97.27%
Epoch 111 - loss: 0.9195, acc: 98.56% / test_loss: 0.9281, test_acc: 97.67%
Epoch 112 - loss: 0.9203, acc: 98.49% / test_loss: 0.9276, test_acc: 97.80%
Epoch 113 - loss: 0.9208, acc: 98.41% / test_loss: 0.9309, test_acc: 97.41%
Epoch 114 - loss: 0.9206, acc: 98.41% / test_loss: 0.9268, test_acc: 97.80%
Epoch 115 - loss: 0.9192, acc: 98.57% / test_loss: 0.9258, test_acc: 97.90%
Epoch 116 - loss: 0.9200, acc: 98.47% / test_loss: 0.9429, test_acc: 96.19%
Epoch 117 - loss: 0.9197, acc: 98.54% / test_loss: 0.9276, test_acc: 97.72%
Epoch 118 - loss: 0.9191, acc: 98.60% / test_loss: 0.9270, test_acc: 97.78%
Epoch 119 - loss: 0.9198, acc: 98.51% / test_loss: 0.9265, test_acc: 97.83%
Epoch 120 - loss: 0.9186, acc: 98.65% / test_loss: 0.9258, test_acc: 97.88%
Epoch 121 - loss: 0.9190, acc: 98.60% / test_loss: 0.9290, test_acc: 97.59%
Epoch 122 - loss: 0.9182, acc: 98.64% / test_loss: 0.9255, test_acc: 97.93%
Epoch 123 - loss: 0.9174, acc: 98.76% / test_loss: 0.9245, test_acc: 98.01%
Epoch 124 - loss: 0.9176, acc: 98.73% / test_loss: 0.9259, test_acc: 97.89%
Epoch 125 - loss: 0.9179, acc: 98.71% / test_loss: 0.9267, test_acc: 97.81%
Epoch 126 - loss: 0.9186, acc: 98.67% / test_loss: 0.9288, test_acc: 97.60%
Epoch 127 - loss: 0.9177, acc: 98.73% / test_loss: 0.9254, test_acc: 97.94%
Epoch 128 - loss: 0.9183, acc: 98.66% / test_loss: 0.9249, test_acc: 97.98%
Epoch 129 - loss: 0.9182, acc: 98.66% / test_loss: 0.9241, test_acc: 98.10%
Epoch 130 - loss: 0.9183, acc: 98.66% / test_loss: 0.9268, test_acc: 97.83%
Epoch 131 - loss: 0.9177, acc: 98.73% / test_loss: 0.9282, test_acc: 97.64%
Epoch 132 - loss: 0.9175, acc: 98.74% / test_loss: 0.9254, test_acc: 97.92%
Epoch 133 - loss: 0.9182, acc: 98.67% / test_loss: 0.9261, test_acc: 97.85%
Epoch 134 - loss: 0.9181, acc: 98.72% / test_loss: 0.9261, test_acc: 97.93%
Epoch 135 - loss: 0.9181, acc: 98.68% / test_loss: 0.9269, test_acc: 97.82%
Epoch 136 - loss: 0.9178, acc: 98.72% / test_loss: 0.9250, test_acc: 97.93%
Epoch 137 - loss: 0.9164, acc: 98.85% / test_loss: 0.9244, test_acc: 98.01%
Epoch 138 - loss: 0.9180, acc: 98.69% / test_loss: 0.9259, test_acc: 97.88%
Epoch 139 - loss: 0.9167, acc: 98.81% / test_loss: 0.9254, test_acc: 97.92%
Epoch 140 - loss: 0.9189, acc: 98.64% / test_loss: 0.9290, test_acc: 97.56%
Epoch 141 - loss: 0.9181, acc: 98.69% / test_loss: 0.9271, test_acc: 97.77%
Epoch 142 - loss: 0.9198, acc: 98.48% / test_loss: 0.9237, test_acc: 98.10%
Epoch 143 - loss: 0.9170, acc: 98.81% / test_loss: 0.9269, test_acc: 97.80%
Epoch 144 - loss: 0.9179, acc: 98.72% / test_loss: 0.9248, test_acc: 98.02%
Epoch 145 - loss: 0.9170, acc: 98.80% / test_loss: 0.9257, test_acc: 97.91%
Epoch 146 - loss: 0.9175, acc: 98.76% / test_loss: 0.9268, test_acc: 97.81%
Epoch 147 - loss: 0.9175, acc: 98.77% / test_loss: 0.9256, test_acc: 97.94%
Epoch 148 - loss: 0.9166, acc: 98.84% / test_loss: 0.9237, test_acc: 98.10%
Epoch 149 - loss: 0.9167, acc: 98.82% / test_loss: 0.9244, test_acc: 98.03%
Epoch 150 - loss: 0.9160, acc: 98.88% / test_loss: 0.9240, test_acc: 98.08%
Epoch 151 - loss: 0.9166, acc: 98.82% / test_loss: 0.9250, test_acc: 97.99%
Epoch 152 - loss: 0.9180, acc: 98.69% / test_loss: 0.9248, test_acc: 98.00%
Epoch 153 - loss: 0.9171, acc: 98.78% / test_loss: 0.9235, test_acc: 98.14%
Epoch 154 - loss: 0.9170, acc: 98.78% / test_loss: 0.9235, test_acc: 98.09%
Epoch 155 - loss: 0.9169, acc: 98.78% / test_loss: 0.9295, test_acc: 97.54%
Epoch 156 - loss: 0.9190, acc: 98.60% / test_loss: 0.9281, test_acc: 97.68%
Epoch 157 - loss: 0.9174, acc: 98.74% / test_loss: 0.9252, test_acc: 97.93%
Epoch 158 - loss: 0.9165, acc: 98.86% / test_loss: 0.9256, test_acc: 97.90%
Epoch 159 - loss: 0.9172, acc: 98.75% / test_loss: 0.9259, test_acc: 97.88%
Epoch 160 - loss: 0.9167, acc: 98.83% / test_loss: 0.9258, test_acc: 97.94%
Epoch 161 - loss: 0.9178, acc: 98.71% / test_loss: 0.9249, test_acc: 97.99%
Epoch 162 - loss: 0.9160, acc: 98.90% / test_loss: 0.9258, test_acc: 97.93%
Epoch 163 - loss: 0.9164, acc: 98.84% / test_loss: 0.9249, test_acc: 97.98%
Epoch 164 - loss: 0.9165, acc: 98.84% / test_loss: 0.9244, test_acc: 98.02%
Epoch 165 - loss: 0.9171, acc: 98.76% / test_loss: 0.9242, test_acc: 98.08%
Epoch 166 - loss: 0.9166, acc: 98.84% / test_loss: 0.9257, test_acc: 97.91%
Epoch 167 - loss: 0.9160, acc: 98.91% / test_loss: 0.9237, test_acc: 98.10%
Epoch 168 - loss: 0.9159, acc: 98.88% / test_loss: 0.9240, test_acc: 98.11%
Epoch 169 - loss: 0.9173, acc: 98.78% / test_loss: 0.9261, test_acc: 97.87%
Epoch 170 - loss: 0.9172, acc: 98.76% / test_loss: 0.9252, test_acc: 97.92%
Epoch 171 - loss: 0.9157, acc: 98.92% / test_loss: 0.9232, test_acc: 98.16%
Epoch 172 - loss: 0.9159, acc: 98.89% / test_loss: 0.9239, test_acc: 98.05%
Epoch 173 - loss: 0.9152, acc: 98.96% / test_loss: 0.9245, test_acc: 98.01%
Epoch 174 - loss: 0.9177, acc: 98.73% / test_loss: 0.9280, test_acc: 97.67%
Epoch 175 - loss: 0.9171, acc: 98.77% / test_loss: 0.9258, test_acc: 97.95%
Epoch 176 - loss: 0.9177, acc: 98.72% / test_loss: 0.9306, test_acc: 97.42%
Epoch 177 - loss: 0.9172, acc: 98.75% / test_loss: 0.9248, test_acc: 98.02%
Epoch 178 - loss: 0.9173, acc: 98.75% / test_loss: 0.9259, test_acc: 97.89%
Epoch 179 - loss: 0.9159, acc: 98.90% / test_loss: 0.9256, test_acc: 97.91%
Epoch 180 - loss: 0.9161, acc: 98.89% / test_loss: 0.9242, test_acc: 98.07%
Epoch 181 - loss: 0.9151, acc: 98.97% / test_loss: 0.9231, test_acc: 98.17%
Epoch 182 - loss: 0.9147, acc: 99.03% / test_loss: 0.9246, test_acc: 98.01%
Epoch 183 - loss: 0.9154, acc: 98.96% / test_loss: 0.9328, test_acc: 97.21%
Epoch 184 - loss: 0.9167, acc: 98.81% / test_loss: 0.9248, test_acc: 97.99%
Epoch 185 - loss: 0.9175, acc: 98.72% / test_loss: 0.9246, test_acc: 97.99%
Epoch 186 - loss: 0.9161, acc: 98.88% / test_loss: 0.9246, test_acc: 97.98%
Epoch 187 - loss: 0.9166, acc: 98.83% / test_loss: 0.9255, test_acc: 97.89%
Epoch 188 - loss: 0.9153, acc: 98.97% / test_loss: 0.9252, test_acc: 97.96%
Epoch 189 - loss: 0.9168, acc: 98.82% / test_loss: 0.9244, test_acc: 98.04%
Epoch 190 - loss: 0.9172, acc: 98.74% / test_loss: 0.9275, test_acc: 97.76%
Epoch 191 - loss: 0.9156, acc: 98.94% / test_loss: 0.9236, test_acc: 98.13%
Epoch 192 - loss: 0.9156, acc: 98.93% / test_loss: 0.9237, test_acc: 98.11%
Epoch 193 - loss: 0.9162, acc: 98.85% / test_loss: 0.9283, test_acc: 97.61%
Epoch 194 - loss: 0.9182, acc: 98.66% / test_loss: 0.9276, test_acc: 97.68%
Epoch 195 - loss: 0.9167, acc: 98.83% / test_loss: 0.9235, test_acc: 98.13%
Epoch 196 - loss: 0.9148, acc: 99.00% / test_loss: 0.9229, test_acc: 98.21%
Epoch 197 - loss: 0.9167, acc: 98.81% / test_loss: 0.9301, test_acc: 97.46%
Epoch 198 - loss: 0.9159, acc: 98.91% / test_loss: 0.9263, test_acc: 97.86%
Epoch 199 - loss: 0.9159, acc: 98.90% / test_loss: 0.9239, test_acc: 98.06%
Epoch 200 - loss: 0.9156, acc: 98.94% / test_loss: 0.9224, test_acc: 98.24%
Epoch 201 - loss: 0.9157, acc: 98.94% / test_loss: 0.9266, test_acc: 97.79%
Epoch 202 - loss: 0.9172, acc: 98.75% / test_loss: 0.9257, test_acc: 97.92%
Epoch 203 - loss: 0.9164, acc: 98.85% / test_loss: 0.9241, test_acc: 98.10%
Epoch 204 - loss: 0.9157, acc: 98.93% / test_loss: 0.9237, test_acc: 98.09%
Epoch 205 - loss: 0.9159, acc: 98.89% / test_loss: 0.9236, test_acc: 98.14%
Epoch 206 - loss: 0.9157, acc: 98.92% / test_loss: 0.9245, test_acc: 98.01%
Epoch 207 - loss: 0.9152, acc: 98.97% / test_loss: 0.9246, test_acc: 98.02%
Epoch 208 - loss: 0.9186, acc: 98.61% / test_loss: 0.9411, test_acc: 96.40%
Epoch 209 - loss: 0.9173, acc: 98.76% / test_loss: 0.9238, test_acc: 98.11%
Epoch 210 - loss: 0.9158, acc: 98.91% / test_loss: 0.9228, test_acc: 98.23%
Epoch 211 - loss: 0.9155, acc: 98.94% / test_loss: 0.9248, test_acc: 97.98%
Epoch 212 - loss: 0.9165, acc: 98.84% / test_loss: 0.9238, test_acc: 98.08%
Epoch 213 - loss: 0.9172, acc: 98.78% / test_loss: 0.9248, test_acc: 98.01%
Epoch 214 - loss: 0.9155, acc: 98.94% / test_loss: 0.9246, test_acc: 98.02%
Epoch 215 - loss: 0.9157, acc: 98.91% / test_loss: 0.9252, test_acc: 97.95%
Epoch 216 - loss: 0.9145, acc: 99.05% / test_loss: 0.9234, test_acc: 98.13%
Epoch 217 - loss: 0.9149, acc: 98.99% / test_loss: 0.9237, test_acc: 98.09%
Epoch 218 - loss: 0.9157, acc: 98.91% / test_loss: 0.9254, test_acc: 97.92%
Epoch 219 - loss: 0.9151, acc: 98.98% / test_loss: 0.9226, test_acc: 98.23%
Epoch 220 - loss: 0.9146, acc: 99.02% / test_loss: 0.9234, test_acc: 98.14%
Epoch 221 - loss: 0.9143, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 222 - loss: 0.9149, acc: 99.00% / test_loss: 0.9232, test_acc: 98.18%
Epoch 223 - loss: 0.9168, acc: 98.78% / test_loss: 0.9263, test_acc: 97.85%
Epoch 224 - loss: 0.9176, acc: 98.72% / test_loss: 0.9234, test_acc: 98.14%
Epoch 225 - loss: 0.9161, acc: 98.89% / test_loss: 0.9239, test_acc: 98.07%
Epoch 226 - loss: 0.9154, acc: 98.95% / test_loss: 0.9237, test_acc: 98.10%
Epoch 227 - loss: 0.9152, acc: 98.98% / test_loss: 0.9246, test_acc: 98.01%
Epoch 228 - loss: 0.9158, acc: 98.90% / test_loss: 0.9244, test_acc: 98.10%
Epoch 229 - loss: 0.9155, acc: 98.94% / test_loss: 0.9242, test_acc: 98.04%
Epoch 230 - loss: 0.9148, acc: 99.03% / test_loss: 0.9242, test_acc: 98.05%
Epoch 231 - loss: 0.9161, acc: 98.87% / test_loss: 0.9268, test_acc: 97.80%
Epoch 232 - loss: 0.9145, acc: 99.05% / test_loss: 0.9220, test_acc: 98.27%
Epoch 233 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.24%
Epoch 234 - loss: 0.9143, acc: 99.06% / test_loss: 0.9231, test_acc: 98.16%
Epoch 235 - loss: 0.9155, acc: 98.93% / test_loss: 0.9265, test_acc: 97.83%
Epoch 236 - loss: 0.9171, acc: 98.77% / test_loss: 0.9223, test_acc: 98.23%
Epoch 237 - loss: 0.9154, acc: 98.96% / test_loss: 0.9227, test_acc: 98.20%
Epoch 238 - loss: 0.9146, acc: 99.03% / test_loss: 0.9225, test_acc: 98.21%
Epoch 239 - loss: 0.9146, acc: 99.03% / test_loss: 0.9235, test_acc: 98.13%
Epoch 240 - loss: 0.9142, acc: 99.06% / test_loss: 0.9223, test_acc: 98.26%
Epoch 241 - loss: 0.9163, acc: 98.85% / test_loss: 0.9264, test_acc: 97.80%
Epoch 242 - loss: 0.9155, acc: 98.94% / test_loss: 0.9235, test_acc: 98.14%
Epoch 243 - loss: 0.9160, acc: 98.88% / test_loss: 0.9223, test_acc: 98.26%
Epoch 244 - loss: 0.9162, acc: 98.88% / test_loss: 0.9272, test_acc: 97.73%
Epoch 245 - loss: 0.9154, acc: 98.97% / test_loss: 0.9262, test_acc: 97.81%
Epoch 246 - loss: 0.9162, acc: 98.85% / test_loss: 0.9230, test_acc: 98.18%
Epoch 247 - loss: 0.9153, acc: 98.94% / test_loss: 0.9241, test_acc: 98.08%
Epoch 248 - loss: 0.9155, acc: 98.94% / test_loss: 0.9229, test_acc: 98.17%
Epoch 249 - loss: 0.9143, acc: 99.06% / test_loss: 0.9260, test_acc: 97.83%
Epoch 250 - loss: 0.9147, acc: 99.01% / test_loss: 0.9241, test_acc: 98.07%
Epoch 251 - loss: 0.9143, acc: 99.06% / test_loss: 0.9233, test_acc: 98.15%
Epoch 252 - loss: 0.9144, acc: 99.05% / test_loss: 0.9215, test_acc: 98.33%
Epoch 253 - loss: 0.9138, acc: 99.10% / test_loss: 0.9226, test_acc: 98.20%
Epoch 254 - loss: 0.9154, acc: 98.96% / test_loss: 0.9241, test_acc: 98.07%
Epoch 255 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.29%
Epoch 256 - loss: 0.9133, acc: 99.15% / test_loss: 0.9236, test_acc: 98.10%
Epoch 257 - loss: 0.9154, acc: 98.94% / test_loss: 0.9236, test_acc: 98.11%
Epoch 258 - loss: 0.9152, acc: 99.00% / test_loss: 0.9221, test_acc: 98.27%
Epoch 259 - loss: 0.9141, acc: 99.06% / test_loss: 0.9220, test_acc: 98.28%
Epoch 260 - loss: 0.9137, acc: 99.11% / test_loss: 0.9241, test_acc: 98.09%
Epoch 261 - loss: 0.9146, acc: 99.01% / test_loss: 0.9258, test_acc: 97.91%
Epoch 262 - loss: 0.9141, acc: 99.09% / test_loss: 0.9245, test_acc: 98.04%
Epoch 263 - loss: 0.9155, acc: 98.92% / test_loss: 0.9252, test_acc: 97.95%
Epoch 264 - loss: 0.9166, acc: 98.81% / test_loss: 0.9278, test_acc: 97.72%
Epoch 265 - loss: 0.9152, acc: 98.97% / test_loss: 0.9236, test_acc: 98.12%
Epoch 266 - loss: 0.9132, acc: 99.17% / test_loss: 0.9245, test_acc: 98.04%
Epoch 267 - loss: 0.9149, acc: 98.97% / test_loss: 0.9230, test_acc: 98.15%
Epoch 268 - loss: 0.9131, acc: 99.18% / test_loss: 0.9226, test_acc: 98.21%
Epoch 269 - loss: 0.9153, acc: 98.94% / test_loss: 0.9235, test_acc: 98.14%
Epoch 270 - loss: 0.9159, acc: 98.88% / test_loss: 0.9229, test_acc: 98.21%
Epoch 271 - loss: 0.9145, acc: 99.03% / test_loss: 0.9236, test_acc: 98.10%
Epoch 272 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.26%
Epoch 273 - loss: 0.9134, acc: 99.15% / test_loss: 0.9320, test_acc: 97.25%
Epoch 274 - loss: 0.9154, acc: 98.92% / test_loss: 0.9221, test_acc: 98.27%
Epoch 275 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.26%
Epoch 276 - loss: 0.9137, acc: 99.13% / test_loss: 0.9226, test_acc: 98.24%
Epoch 277 - loss: 0.9136, acc: 99.13% / test_loss: 0.9252, test_acc: 97.95%
Epoch 278 - loss: 0.9147, acc: 99.02% / test_loss: 0.9218, test_acc: 98.29%
Epoch 279 - loss: 0.9155, acc: 98.94% / test_loss: 0.9250, test_acc: 97.95%
Epoch 280 - loss: 0.9165, acc: 98.85% / test_loss: 0.9221, test_acc: 98.29%
Epoch 281 - loss: 0.9141, acc: 99.06% / test_loss: 0.9208, test_acc: 98.41%
Epoch 282 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.32%
Epoch 283 - loss: 0.9134, acc: 99.15% / test_loss: 0.9225, test_acc: 98.23%
Epoch 284 - loss: 0.9138, acc: 99.10% / test_loss: 0.9228, test_acc: 98.16%
Epoch 285 - loss: 0.9151, acc: 98.98% / test_loss: 0.9251, test_acc: 97.98%
Epoch 286 - loss: 0.9155, acc: 98.93% / test_loss: 0.9247, test_acc: 97.98%
Epoch 287 - loss: 0.9157, acc: 98.94% / test_loss: 0.9233, test_acc: 98.15%
Epoch 288 - loss: 0.9156, acc: 98.92% / test_loss: 0.9236, test_acc: 98.15%
Epoch 289 - loss: 0.9142, acc: 99.07% / test_loss: 0.9220, test_acc: 98.30%
Epoch 290 - loss: 0.9137, acc: 99.12% / test_loss: 0.9212, test_acc: 98.36%
Epoch 291 - loss: 0.9133, acc: 99.15% / test_loss: 0.9212, test_acc: 98.34%
Epoch 292 - loss: 0.9132, acc: 99.16% / test_loss: 0.9219, test_acc: 98.32%
Epoch 293 - loss: 0.9147, acc: 99.00% / test_loss: 0.9305, test_acc: 97.41%
Epoch 294 - loss: 0.9151, acc: 98.97% / test_loss: 0.9223, test_acc: 98.25%
Epoch 295 - loss: 0.9154, acc: 98.94% / test_loss: 0.9220, test_acc: 98.26%
Epoch 296 - loss: 0.9143, acc: 99.05% / test_loss: 0.9230, test_acc: 98.17%
Epoch 297 - loss: 0.9140, acc: 99.08% / test_loss: 0.9208, test_acc: 98.41%
Epoch 298 - loss: 0.9133, acc: 99.15% / test_loss: 0.9207, test_acc: 98.41%
Epoch 299 - loss: 0.9134, acc: 99.15% / test_loss: 0.9211, test_acc: 98.38%
Epoch 300 - loss: 0.9136, acc: 99.15% / test_loss: 0.9230, test_acc: 98.18%
Epoch 301 - loss: 0.9147, acc: 99.02% / test_loss: 0.9243, test_acc: 98.04%
Epoch 302 - loss: 0.9139, acc: 99.08% / test_loss: 0.9216, test_acc: 98.30%
Epoch 303 - loss: 0.9134, acc: 99.15% / test_loss: 0.9233, test_acc: 98.14%
Epoch 304 - loss: 0.9159, acc: 98.90% / test_loss: 0.9263, test_acc: 97.84%
Epoch 305 - loss: 0.9133, acc: 99.17% / test_loss: 0.9215, test_acc: 98.34%
Epoch 306 - loss: 0.9126, acc: 99.23% / test_loss: 0.9229, test_acc: 98.19%
Epoch 307 - loss: 0.9134, acc: 99.15% / test_loss: 0.9242, test_acc: 98.07%
Epoch 308 - loss: 0.9133, acc: 99.16% / test_loss: 0.9230, test_acc: 98.19%
Epoch 309 - loss: 0.9131, acc: 99.16% / test_loss: 0.9215, test_acc: 98.32%
Epoch 310 - loss: 0.9152, acc: 98.97% / test_loss: 0.9224, test_acc: 98.23%
Epoch 311 - loss: 0.9132, acc: 99.17% / test_loss: 0.9231, test_acc: 98.14%
Epoch 312 - loss: 0.9124, acc: 99.24% / test_loss: 0.9232, test_acc: 98.13%
Epoch 313 - loss: 0.9141, acc: 99.05% / test_loss: 0.9241, test_acc: 98.05%
Epoch 314 - loss: 0.9129, acc: 99.20% / test_loss: 0.9254, test_acc: 97.93%
Epoch 315 - loss: 0.9156, acc: 98.90% / test_loss: 0.9240, test_acc: 98.06%
Epoch 316 - loss: 0.9137, acc: 99.12% / test_loss: 0.9217, test_acc: 98.32%
Epoch 317 - loss: 0.9126, acc: 99.22% / test_loss: 0.9219, test_acc: 98.32%
Epoch 318 - loss: 0.9136, acc: 99.13% / test_loss: 0.9220, test_acc: 98.29%
Epoch 319 - loss: 0.9134, acc: 99.15% / test_loss: 0.9211, test_acc: 98.37%
Epoch 320 - loss: 0.9144, acc: 99.03% / test_loss: 0.9231, test_acc: 98.17%
Epoch 321 - loss: 0.9145, acc: 99.03% / test_loss: 0.9226, test_acc: 98.17%
Epoch 322 - loss: 0.9125, acc: 99.23% / test_loss: 0.9227, test_acc: 98.20%
Epoch 323 - loss: 0.9142, acc: 99.07% / test_loss: 0.9230, test_acc: 98.20%
Epoch 324 - loss: 0.9137, acc: 99.12% / test_loss: 0.9265, test_acc: 97.83%
Epoch 325 - loss: 0.9147, acc: 99.02% / test_loss: 0.9220, test_acc: 98.30%
Epoch 326 - loss: 0.9130, acc: 99.19% / test_loss: 0.9218, test_acc: 98.31%
Epoch 327 - loss: 0.9127, acc: 99.21% / test_loss: 0.9222, test_acc: 98.26%
Epoch 328 - loss: 0.9132, acc: 99.17% / test_loss: 0.9235, test_acc: 98.13%
Epoch 329 - loss: 0.9166, acc: 98.81% / test_loss: 0.9239, test_acc: 98.07%
Epoch 330 - loss: 0.9141, acc: 99.07% / test_loss: 0.9242, test_acc: 98.05%
Epoch 331 - loss: 0.9131, acc: 99.17% / test_loss: 0.9243, test_acc: 98.06%
Epoch 332 - loss: 0.9133, acc: 99.16% / test_loss: 0.9222, test_acc: 98.26%
Epoch 333 - loss: 0.9129, acc: 99.18% / test_loss: 0.9228, test_acc: 98.18%
Epoch 334 - loss: 0.9130, acc: 99.19% / test_loss: 0.9227, test_acc: 98.21%
Epoch 335 - loss: 0.9150, acc: 98.99% / test_loss: 0.9233, test_acc: 98.14%
Epoch 336 - loss: 0.9152, acc: 98.95% / test_loss: 0.9229, test_acc: 98.21%
Epoch 337 - loss: 0.9133, acc: 99.15% / test_loss: 0.9219, test_acc: 98.29%
Epoch 338 - loss: 0.9143, acc: 99.03% / test_loss: 0.9233, test_acc: 98.16%
Epoch 339 - loss: 0.9153, acc: 98.95% / test_loss: 0.9243, test_acc: 98.03%
Epoch 340 - loss: 0.9146, acc: 99.03% / test_loss: 0.9228, test_acc: 98.17%
Epoch 341 - loss: 0.9126, acc: 99.22% / test_loss: 0.9223, test_acc: 98.25%
Epoch 342 - loss: 0.9124, acc: 99.24% / test_loss: 0.9222, test_acc: 98.26%
Epoch 343 - loss: 0.9124, acc: 99.24% / test_loss: 0.9219, test_acc: 98.30%
Epoch 344 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.32%
Epoch 345 - loss: 0.9124, acc: 99.24% / test_loss: 0.9221, test_acc: 98.28%
Epoch 346 - loss: 0.9124, acc: 99.24% / test_loss: 0.9218, test_acc: 98.28%
Epoch 347 - loss: 0.9124, acc: 99.24% / test_loss: 0.9223, test_acc: 98.22%
Epoch 348 - loss: 0.9124, acc: 99.24% / test_loss: 0.9227, test_acc: 98.19%
Epoch 349 - loss: 0.9196, acc: 98.52% / test_loss: 0.9231, test_acc: 98.15%
Epoch 350 - loss: 0.9151, acc: 98.97% / test_loss: 0.9235, test_acc: 98.11%
Epoch 351 - loss: 0.9137, acc: 99.11% / test_loss: 0.9290, test_acc: 97.59%
Epoch 352 - loss: 0.9151, acc: 98.97% / test_loss: 0.9241, test_acc: 98.07%
Epoch 353 - loss: 0.9133, acc: 99.15% / test_loss: 0.9217, test_acc: 98.29%
Epoch 354 - loss: 0.9127, acc: 99.21% / test_loss: 0.9214, test_acc: 98.35%
Epoch 355 - loss: 0.9126, acc: 99.22% / test_loss: 0.9217, test_acc: 98.31%
Epoch 356 - loss: 0.9139, acc: 99.09% / test_loss: 0.9387, test_acc: 96.62%
Epoch 357 - loss: 0.9163, acc: 98.85% / test_loss: 0.9216, test_acc: 98.31%
Epoch 358 - loss: 0.9149, acc: 99.00% / test_loss: 0.9229, test_acc: 98.20%
Epoch 359 - loss: 0.9143, acc: 99.05% / test_loss: 0.9251, test_acc: 97.96%
Epoch 360 - loss: 0.9136, acc: 99.12% / test_loss: 0.9227, test_acc: 98.19%
Epoch 361 - loss: 0.9139, acc: 99.11% / test_loss: 0.9223, test_acc: 98.24%
Epoch 362 - loss: 0.9140, acc: 99.08% / test_loss: 0.9221, test_acc: 98.29%
Epoch 363 - loss: 0.9141, acc: 99.07% / test_loss: 0.9232, test_acc: 98.14%
Epoch 364 - loss: 0.9141, acc: 99.06% / test_loss: 0.9222, test_acc: 98.23%
Epoch 365 - loss: 0.9136, acc: 99.12% / test_loss: 0.9258, test_acc: 97.94%
Epoch 366 - loss: 0.9142, acc: 99.06% / test_loss: 0.9242, test_acc: 98.07%
Epoch 367 - loss: 0.9148, acc: 99.01% / test_loss: 0.9222, test_acc: 98.21%
Epoch 368 - loss: 0.9127, acc: 99.21% / test_loss: 0.9221, test_acc: 98.26%
Epoch 369 - loss: 0.9127, acc: 99.21% / test_loss: 0.9215, test_acc: 98.32%
Epoch 370 - loss: 0.9136, acc: 99.12% / test_loss: 0.9215, test_acc: 98.31%
Epoch 371 - loss: 0.9124, acc: 99.24% / test_loss: 0.9216, test_acc: 98.32%
Epoch 372 - loss: 0.9124, acc: 99.24% / test_loss: 0.9211, test_acc: 98.39%
Epoch 373 - loss: 0.9152, acc: 98.97% / test_loss: 0.9226, test_acc: 98.22%
Epoch 374 - loss: 0.9142, acc: 99.06% / test_loss: 0.9234, test_acc: 98.13%
Epoch 375 - loss: 0.9135, acc: 99.15% / test_loss: 0.9227, test_acc: 98.20%
Epoch 376 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.28%
Epoch 377 - loss: 0.9131, acc: 99.18% / test_loss: 0.9224, test_acc: 98.25%
Epoch 378 - loss: 0.9140, acc: 99.09% / test_loss: 0.9239, test_acc: 98.07%
Epoch 379 - loss: 0.9134, acc: 99.18% / test_loss: 0.9215, test_acc: 98.33%
Epoch 380 - loss: 0.9123, acc: 99.25% / test_loss: 0.9227, test_acc: 98.18%
Epoch 381 - loss: 0.9141, acc: 99.07% / test_loss: 0.9218, test_acc: 98.32%
Epoch 382 - loss: 0.9127, acc: 99.21% / test_loss: 0.9219, test_acc: 98.29%
Epoch 383 - loss: 0.9129, acc: 99.19% / test_loss: 0.9220, test_acc: 98.27%
Epoch 384 - loss: 0.9130, acc: 99.18% / test_loss: 0.9224, test_acc: 98.23%
Epoch 385 - loss: 0.9135, acc: 99.15% / test_loss: 0.9221, test_acc: 98.28%
Epoch 386 - loss: 0.9131, acc: 99.19% / test_loss: 0.9220, test_acc: 98.29%
Epoch 387 - loss: 0.9129, acc: 99.20% / test_loss: 0.9222, test_acc: 98.23%
Epoch 388 - loss: 0.9157, acc: 98.90% / test_loss: 0.9247, test_acc: 97.98%
Epoch 389 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.17%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9240, test_acc: 98.10%
Epoch 391 - loss: 0.9135, acc: 99.13% / test_loss: 0.9241, test_acc: 98.07%
Epoch 392 - loss: 0.9155, acc: 98.93% / test_loss: 0.9231, test_acc: 98.17%
Epoch 393 - loss: 0.9144, acc: 99.04% / test_loss: 0.9233, test_acc: 98.16%
Epoch 394 - loss: 0.9132, acc: 99.17% / test_loss: 0.9230, test_acc: 98.17%
Epoch 395 - loss: 0.9131, acc: 99.19% / test_loss: 0.9220, test_acc: 98.29%
Epoch 396 - loss: 0.9141, acc: 99.06% / test_loss: 0.9229, test_acc: 98.18%
Epoch 397 - loss: 0.9152, acc: 98.96% / test_loss: 0.9232, test_acc: 98.14%
Epoch 398 - loss: 0.9130, acc: 99.18% / test_loss: 0.9233, test_acc: 98.17%
Epoch 399 - loss: 0.9133, acc: 99.16% / test_loss: 0.9233, test_acc: 98.14%
Epoch 400 - loss: 0.9144, acc: 99.06% / test_loss: 0.9259, test_acc: 97.89%
Best test accuracy 98.41% in epoch 281.
----------------------------------------------------------------------------------------------------
Run 4
Epoch 1 - loss: 1.3437, acc: 57.12% / test_loss: 1.2475, test_acc: 63.71%
Epoch 2 - loss: 1.1105, acc: 80.26% / test_loss: 1.0988, test_acc: 80.91%
Epoch 3 - loss: 1.0926, acc: 81.47% / test_loss: 1.0953, test_acc: 81.34%
Epoch 4 - loss: 1.0871, acc: 81.77% / test_loss: 1.0791, test_acc: 82.70%
Epoch 5 - loss: 1.0826, acc: 82.20% / test_loss: 1.0871, test_acc: 81.94%
Epoch 6 - loss: 1.0662, acc: 84.01% / test_loss: 1.0328, test_acc: 87.56%
Epoch 7 - loss: 1.0377, acc: 86.99% / test_loss: 1.0446, test_acc: 86.19%
Epoch 8 - loss: 1.0316, acc: 87.49% / test_loss: 1.0257, test_acc: 88.04%
Epoch 9 - loss: 1.0265, acc: 87.92% / test_loss: 1.0217, test_acc: 88.30%
Epoch 10 - loss: 1.0262, acc: 87.92% / test_loss: 1.0162, test_acc: 88.89%
Epoch 11 - loss: 1.0244, acc: 88.03% / test_loss: 1.0166, test_acc: 88.84%
Epoch 12 - loss: 1.0222, acc: 88.31% / test_loss: 1.0172, test_acc: 88.76%
Epoch 13 - loss: 1.0208, acc: 88.46% / test_loss: 1.0245, test_acc: 88.09%
Epoch 14 - loss: 1.0207, acc: 88.37% / test_loss: 1.0138, test_acc: 89.04%
Epoch 15 - loss: 1.0204, acc: 88.35% / test_loss: 1.0114, test_acc: 89.26%
Epoch 16 - loss: 1.0175, acc: 88.70% / test_loss: 1.0115, test_acc: 89.26%
Epoch 17 - loss: 1.0177, acc: 88.64% / test_loss: 1.0099, test_acc: 89.41%
Epoch 18 - loss: 1.0140, acc: 89.07% / test_loss: 1.0100, test_acc: 89.41%
Epoch 19 - loss: 1.0145, acc: 88.95% / test_loss: 1.0105, test_acc: 89.29%
Epoch 20 - loss: 1.0132, acc: 89.01% / test_loss: 1.0053, test_acc: 89.83%
Epoch 21 - loss: 1.0108, acc: 89.31% / test_loss: 1.0074, test_acc: 89.60%
Epoch 22 - loss: 1.0094, acc: 89.48% / test_loss: 1.0071, test_acc: 89.68%
Epoch 23 - loss: 1.0110, acc: 89.27% / test_loss: 1.0045, test_acc: 89.88%
Epoch 24 - loss: 1.0094, acc: 89.37% / test_loss: 1.0033, test_acc: 90.00%
Epoch 25 - loss: 1.0072, acc: 89.60% / test_loss: 1.0055, test_acc: 89.91%
Epoch 26 - loss: 1.0083, acc: 89.53% / test_loss: 1.0047, test_acc: 89.90%
Epoch 27 - loss: 1.0070, acc: 89.63% / test_loss: 1.0041, test_acc: 89.88%
Epoch 28 - loss: 1.0071, acc: 89.60% / test_loss: 1.0033, test_acc: 89.99%
Epoch 29 - loss: 1.0067, acc: 89.66% / test_loss: 1.0051, test_acc: 89.88%
Epoch 30 - loss: 1.0077, acc: 89.57% / test_loss: 1.0046, test_acc: 89.90%
Epoch 31 - loss: 1.0059, acc: 89.73% / test_loss: 1.0034, test_acc: 90.00%
Epoch 32 - loss: 1.0063, acc: 89.69% / test_loss: 1.0001, test_acc: 90.28%
Epoch 33 - loss: 1.0047, acc: 89.83% / test_loss: 1.0032, test_acc: 90.03%
Epoch 34 - loss: 1.0046, acc: 89.84% / test_loss: 1.0022, test_acc: 90.13%
Epoch 35 - loss: 1.0053, acc: 89.77% / test_loss: 1.0066, test_acc: 89.75%
Epoch 36 - loss: 1.0054, acc: 89.74% / test_loss: 1.0044, test_acc: 89.97%
Epoch 37 - loss: 1.0040, acc: 89.93% / test_loss: 1.0001, test_acc: 90.26%
Epoch 38 - loss: 1.0032, acc: 89.95% / test_loss: 0.9983, test_acc: 90.41%
Epoch 39 - loss: 1.0015, acc: 90.13% / test_loss: 0.9990, test_acc: 90.31%
Epoch 40 - loss: 1.0025, acc: 90.05% / test_loss: 0.9996, test_acc: 90.28%
Epoch 41 - loss: 1.0020, acc: 90.07% / test_loss: 1.0001, test_acc: 90.24%
Epoch 42 - loss: 1.0013, acc: 90.16% / test_loss: 0.9975, test_acc: 90.43%
Epoch 43 - loss: 1.0002, acc: 90.29% / test_loss: 0.9976, test_acc: 90.52%
Epoch 44 - loss: 1.0006, acc: 90.25% / test_loss: 1.0012, test_acc: 90.17%
Epoch 45 - loss: 1.0014, acc: 90.11% / test_loss: 0.9997, test_acc: 90.26%
Epoch 46 - loss: 1.0000, acc: 90.26% / test_loss: 0.9977, test_acc: 90.44%
Epoch 47 - loss: 1.0002, acc: 90.27% / test_loss: 0.9970, test_acc: 90.54%
Epoch 48 - loss: 0.9983, acc: 90.42% / test_loss: 0.9966, test_acc: 90.55%
Epoch 49 - loss: 0.9995, acc: 90.35% / test_loss: 1.0001, test_acc: 90.28%
Epoch 50 - loss: 0.9988, acc: 90.37% / test_loss: 0.9962, test_acc: 90.65%
Epoch 51 - loss: 0.9969, acc: 90.52% / test_loss: 0.9962, test_acc: 90.62%
Epoch 52 - loss: 0.9994, acc: 90.30% / test_loss: 0.9962, test_acc: 90.62%
Epoch 53 - loss: 0.9983, acc: 90.44% / test_loss: 0.9967, test_acc: 90.62%
Epoch 54 - loss: 0.9956, acc: 90.69% / test_loss: 0.9940, test_acc: 90.83%
Epoch 55 - loss: 0.9947, acc: 90.74% / test_loss: 0.9951, test_acc: 90.70%
Epoch 56 - loss: 0.9941, acc: 90.83% / test_loss: 0.9967, test_acc: 90.63%
Epoch 57 - loss: 0.9951, acc: 90.71% / test_loss: 0.9931, test_acc: 90.96%
Epoch 58 - loss: 0.9932, acc: 90.92% / test_loss: 0.9918, test_acc: 90.99%
Epoch 59 - loss: 0.9918, acc: 91.06% / test_loss: 0.9881, test_acc: 91.44%
Epoch 60 - loss: 0.9877, acc: 91.42% / test_loss: 0.9868, test_acc: 91.55%
Epoch 61 - loss: 0.9858, acc: 91.66% / test_loss: 0.9905, test_acc: 91.26%
Epoch 62 - loss: 0.9846, acc: 91.76% / test_loss: 0.9843, test_acc: 91.82%
Epoch 63 - loss: 0.9821, acc: 91.98% / test_loss: 0.9843, test_acc: 91.73%
Epoch 64 - loss: 0.9815, acc: 92.10% / test_loss: 0.9820, test_acc: 92.04%
Epoch 65 - loss: 0.9795, acc: 92.28% / test_loss: 0.9836, test_acc: 91.93%
Epoch 66 - loss: 0.9797, acc: 92.25% / test_loss: 0.9816, test_acc: 92.02%
Epoch 67 - loss: 0.9790, acc: 92.32% / test_loss: 0.9812, test_acc: 92.06%
Epoch 68 - loss: 0.9791, acc: 92.32% / test_loss: 0.9796, test_acc: 92.31%
Epoch 69 - loss: 0.9785, acc: 92.38% / test_loss: 0.9790, test_acc: 92.30%
Epoch 70 - loss: 0.9777, acc: 92.46% / test_loss: 0.9791, test_acc: 92.31%
Epoch 71 - loss: 0.9778, acc: 92.40% / test_loss: 0.9864, test_acc: 91.64%
Epoch 72 - loss: 0.9772, acc: 92.50% / test_loss: 0.9788, test_acc: 92.30%
Epoch 73 - loss: 0.9768, acc: 92.51% / test_loss: 0.9797, test_acc: 92.22%
Epoch 74 - loss: 0.9775, acc: 92.42% / test_loss: 0.9827, test_acc: 91.96%
Epoch 75 - loss: 0.9765, acc: 92.51% / test_loss: 0.9778, test_acc: 92.41%
Epoch 76 - loss: 0.9756, acc: 92.64% / test_loss: 0.9792, test_acc: 92.28%
Epoch 77 - loss: 0.9756, acc: 92.65% / test_loss: 0.9796, test_acc: 92.22%
Epoch 78 - loss: 0.9763, acc: 92.53% / test_loss: 0.9793, test_acc: 92.32%
Epoch 79 - loss: 0.9751, acc: 92.65% / test_loss: 0.9820, test_acc: 91.99%
Epoch 80 - loss: 0.9754, acc: 92.63% / test_loss: 0.9773, test_acc: 92.42%
Epoch 81 - loss: 0.9751, acc: 92.69% / test_loss: 0.9917, test_acc: 90.98%
Epoch 82 - loss: 0.9766, acc: 92.55% / test_loss: 0.9811, test_acc: 92.07%
Epoch 83 - loss: 0.9750, acc: 92.69% / test_loss: 0.9783, test_acc: 92.37%
Epoch 84 - loss: 0.9737, acc: 92.82% / test_loss: 0.9772, test_acc: 92.46%
Epoch 85 - loss: 0.9743, acc: 92.76% / test_loss: 0.9784, test_acc: 92.36%
Epoch 86 - loss: 0.9742, acc: 92.76% / test_loss: 0.9765, test_acc: 92.53%
Epoch 87 - loss: 0.9734, acc: 92.84% / test_loss: 0.9798, test_acc: 92.27%
Epoch 88 - loss: 0.9731, acc: 92.86% / test_loss: 0.9762, test_acc: 92.56%
Epoch 89 - loss: 0.9731, acc: 92.87% / test_loss: 0.9765, test_acc: 92.45%
Epoch 90 - loss: 0.9734, acc: 92.82% / test_loss: 0.9763, test_acc: 92.56%
Epoch 91 - loss: 0.9729, acc: 92.88% / test_loss: 0.9764, test_acc: 92.55%
Epoch 92 - loss: 0.9740, acc: 92.77% / test_loss: 0.9764, test_acc: 92.53%
Epoch 93 - loss: 0.9718, acc: 93.04% / test_loss: 0.9737, test_acc: 92.79%
Epoch 94 - loss: 0.9711, acc: 93.02% / test_loss: 0.9751, test_acc: 92.71%
Epoch 95 - loss: 0.9712, acc: 93.05% / test_loss: 0.9750, test_acc: 92.65%
Epoch 96 - loss: 0.9718, acc: 93.01% / test_loss: 0.9747, test_acc: 92.72%
Epoch 97 - loss: 0.9704, acc: 93.11% / test_loss: 0.9752, test_acc: 92.62%
Epoch 98 - loss: 0.9700, acc: 93.16% / test_loss: 0.9749, test_acc: 92.62%
Epoch 99 - loss: 0.9713, acc: 93.03% / test_loss: 0.9738, test_acc: 92.80%
Epoch 100 - loss: 0.9699, acc: 93.16% / test_loss: 0.9738, test_acc: 92.74%
Epoch 101 - loss: 0.9692, acc: 93.19% / test_loss: 0.9732, test_acc: 92.80%
Epoch 102 - loss: 0.9714, acc: 92.99% / test_loss: 0.9733, test_acc: 92.83%
Epoch 103 - loss: 0.9691, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 104 - loss: 0.9687, acc: 93.25% / test_loss: 0.9770, test_acc: 92.53%
Epoch 105 - loss: 0.9711, acc: 93.05% / test_loss: 0.9729, test_acc: 92.88%
Epoch 106 - loss: 0.9690, acc: 93.22% / test_loss: 0.9737, test_acc: 92.71%
Epoch 107 - loss: 0.9681, acc: 93.30% / test_loss: 0.9716, test_acc: 92.92%
Epoch 108 - loss: 0.9683, acc: 93.30% / test_loss: 0.9755, test_acc: 92.63%
Epoch 109 - loss: 0.9683, acc: 93.30% / test_loss: 0.9759, test_acc: 92.53%
Epoch 110 - loss: 0.9686, acc: 93.30% / test_loss: 0.9750, test_acc: 92.69%
Epoch 111 - loss: 0.9688, acc: 93.25% / test_loss: 0.9728, test_acc: 92.89%
Epoch 112 - loss: 0.9683, acc: 93.30% / test_loss: 0.9741, test_acc: 92.72%
Epoch 113 - loss: 0.9688, acc: 93.27% / test_loss: 0.9720, test_acc: 92.90%
Epoch 114 - loss: 0.9684, acc: 93.27% / test_loss: 0.9724, test_acc: 92.86%
Epoch 115 - loss: 0.9678, acc: 93.33% / test_loss: 0.9738, test_acc: 92.77%
Epoch 116 - loss: 0.9685, acc: 93.27% / test_loss: 0.9709, test_acc: 93.05%
Epoch 117 - loss: 0.9682, acc: 93.33% / test_loss: 0.9729, test_acc: 92.91%
Epoch 118 - loss: 0.9679, acc: 93.35% / test_loss: 0.9720, test_acc: 92.93%
Epoch 119 - loss: 0.9676, acc: 93.37% / test_loss: 0.9711, test_acc: 93.02%
Epoch 120 - loss: 0.9658, acc: 93.52% / test_loss: 0.9714, test_acc: 93.01%
Epoch 121 - loss: 0.9668, acc: 93.43% / test_loss: 0.9704, test_acc: 93.14%
Epoch 122 - loss: 0.9672, acc: 93.42% / test_loss: 0.9712, test_acc: 92.99%
Epoch 123 - loss: 0.9663, acc: 93.48% / test_loss: 0.9702, test_acc: 93.08%
Epoch 124 - loss: 0.9656, acc: 93.52% / test_loss: 0.9732, test_acc: 92.77%
Epoch 125 - loss: 0.9680, acc: 93.32% / test_loss: 0.9775, test_acc: 92.32%
Epoch 126 - loss: 0.9661, acc: 93.51% / test_loss: 0.9718, test_acc: 92.95%
Epoch 127 - loss: 0.9658, acc: 93.54% / test_loss: 0.9721, test_acc: 92.85%
Epoch 128 - loss: 0.9673, acc: 93.39% / test_loss: 0.9706, test_acc: 93.05%
Epoch 129 - loss: 0.9666, acc: 93.45% / test_loss: 0.9736, test_acc: 92.71%
Epoch 130 - loss: 0.9644, acc: 93.70% / test_loss: 0.9696, test_acc: 93.18%
Epoch 131 - loss: 0.9652, acc: 93.60% / test_loss: 0.9727, test_acc: 92.81%
Epoch 132 - loss: 0.9628, acc: 93.83% / test_loss: 0.9638, test_acc: 93.40%
Epoch 133 - loss: 0.9226, acc: 98.22% / test_loss: 0.9274, test_acc: 97.77%
Epoch 134 - loss: 0.9188, acc: 98.63% / test_loss: 0.9277, test_acc: 97.73%
Epoch 135 - loss: 0.9193, acc: 98.59% / test_loss: 0.9274, test_acc: 97.77%
Epoch 136 - loss: 0.9185, acc: 98.65% / test_loss: 0.9298, test_acc: 97.54%
Epoch 137 - loss: 0.9193, acc: 98.57% / test_loss: 0.9274, test_acc: 97.77%
Epoch 138 - loss: 0.9187, acc: 98.65% / test_loss: 0.9268, test_acc: 97.80%
Epoch 139 - loss: 0.9185, acc: 98.65% / test_loss: 0.9264, test_acc: 97.82%
Epoch 140 - loss: 0.9176, acc: 98.75% / test_loss: 0.9260, test_acc: 97.87%
Epoch 141 - loss: 0.9194, acc: 98.55% / test_loss: 0.9251, test_acc: 98.00%
Epoch 142 - loss: 0.9186, acc: 98.64% / test_loss: 0.9238, test_acc: 98.11%
Epoch 143 - loss: 0.9187, acc: 98.62% / test_loss: 0.9299, test_acc: 97.55%
Epoch 144 - loss: 0.9193, acc: 98.59% / test_loss: 0.9282, test_acc: 97.66%
Epoch 145 - loss: 0.9193, acc: 98.54% / test_loss: 0.9251, test_acc: 97.93%
Epoch 146 - loss: 0.9172, acc: 98.78% / test_loss: 0.9255, test_acc: 97.95%
Epoch 147 - loss: 0.9172, acc: 98.78% / test_loss: 0.9250, test_acc: 97.95%
Epoch 148 - loss: 0.9181, acc: 98.66% / test_loss: 0.9264, test_acc: 97.83%
Epoch 149 - loss: 0.9183, acc: 98.68% / test_loss: 0.9248, test_acc: 98.01%
Epoch 150 - loss: 0.9181, acc: 98.67% / test_loss: 0.9268, test_acc: 97.76%
Epoch 151 - loss: 0.9176, acc: 98.75% / test_loss: 0.9249, test_acc: 97.98%
Epoch 152 - loss: 0.9182, acc: 98.67% / test_loss: 0.9243, test_acc: 98.07%
Epoch 153 - loss: 0.9180, acc: 98.70% / test_loss: 0.9238, test_acc: 98.10%
Epoch 154 - loss: 0.9162, acc: 98.89% / test_loss: 0.9254, test_acc: 97.93%
Epoch 155 - loss: 0.9180, acc: 98.68% / test_loss: 0.9248, test_acc: 97.98%
Epoch 156 - loss: 0.9168, acc: 98.82% / test_loss: 0.9244, test_acc: 98.04%
Epoch 157 - loss: 0.9167, acc: 98.81% / test_loss: 0.9245, test_acc: 98.03%
Epoch 158 - loss: 0.9179, acc: 98.71% / test_loss: 0.9250, test_acc: 97.98%
Epoch 159 - loss: 0.9162, acc: 98.87% / test_loss: 0.9244, test_acc: 98.07%
Epoch 160 - loss: 0.9178, acc: 98.72% / test_loss: 0.9267, test_acc: 97.80%
Epoch 161 - loss: 0.9168, acc: 98.82% / test_loss: 0.9274, test_acc: 97.73%
Epoch 162 - loss: 0.9189, acc: 98.61% / test_loss: 0.9246, test_acc: 97.99%
Epoch 163 - loss: 0.9172, acc: 98.77% / test_loss: 0.9243, test_acc: 98.03%
Epoch 164 - loss: 0.9182, acc: 98.68% / test_loss: 0.9259, test_acc: 97.92%
Epoch 165 - loss: 0.9177, acc: 98.72% / test_loss: 0.9252, test_acc: 97.92%
Epoch 166 - loss: 0.9165, acc: 98.83% / test_loss: 0.9248, test_acc: 97.98%
Epoch 167 - loss: 0.9164, acc: 98.85% / test_loss: 0.9249, test_acc: 98.02%
Epoch 168 - loss: 0.9174, acc: 98.75% / test_loss: 0.9245, test_acc: 98.05%
Epoch 169 - loss: 0.9170, acc: 98.78% / test_loss: 0.9237, test_acc: 98.13%
Epoch 170 - loss: 0.9168, acc: 98.80% / test_loss: 0.9252, test_acc: 97.93%
Epoch 171 - loss: 0.9173, acc: 98.78% / test_loss: 0.9254, test_acc: 97.92%
Epoch 172 - loss: 0.9156, acc: 98.91% / test_loss: 0.9238, test_acc: 98.10%
Epoch 173 - loss: 0.9165, acc: 98.84% / test_loss: 0.9261, test_acc: 97.86%
Epoch 174 - loss: 0.9156, acc: 98.90% / test_loss: 0.9240, test_acc: 98.10%
Epoch 175 - loss: 0.9162, acc: 98.88% / test_loss: 0.9249, test_acc: 98.03%
Epoch 176 - loss: 0.9160, acc: 98.91% / test_loss: 0.9234, test_acc: 98.15%
Epoch 177 - loss: 0.9150, acc: 99.00% / test_loss: 0.9250, test_acc: 97.97%
Epoch 178 - loss: 0.9186, acc: 98.61% / test_loss: 0.9250, test_acc: 97.99%
Epoch 179 - loss: 0.9171, acc: 98.76% / test_loss: 0.9255, test_acc: 98.01%
Epoch 180 - loss: 0.9166, acc: 98.85% / test_loss: 0.9250, test_acc: 97.98%
Epoch 181 - loss: 0.9164, acc: 98.84% / test_loss: 0.9233, test_acc: 98.10%
Epoch 182 - loss: 0.9174, acc: 98.75% / test_loss: 0.9243, test_acc: 98.06%
Epoch 183 - loss: 0.9165, acc: 98.84% / test_loss: 0.9241, test_acc: 98.04%
Epoch 184 - loss: 0.9162, acc: 98.87% / test_loss: 0.9239, test_acc: 98.08%
Epoch 185 - loss: 0.9174, acc: 98.76% / test_loss: 0.9236, test_acc: 98.10%
Epoch 186 - loss: 0.9166, acc: 98.84% / test_loss: 0.9234, test_acc: 98.10%
Epoch 187 - loss: 0.9158, acc: 98.91% / test_loss: 0.9227, test_acc: 98.20%
Epoch 188 - loss: 0.9175, acc: 98.72% / test_loss: 0.9243, test_acc: 98.04%
Epoch 189 - loss: 0.9180, acc: 98.69% / test_loss: 0.9270, test_acc: 97.75%
Epoch 190 - loss: 0.9185, acc: 98.65% / test_loss: 0.9237, test_acc: 98.17%
Epoch 191 - loss: 0.9181, acc: 98.66% / test_loss: 0.9240, test_acc: 98.04%
Epoch 192 - loss: 0.9162, acc: 98.86% / test_loss: 0.9240, test_acc: 98.06%
Epoch 193 - loss: 0.9153, acc: 98.96% / test_loss: 0.9246, test_acc: 98.02%
Epoch 194 - loss: 0.9164, acc: 98.85% / test_loss: 0.9234, test_acc: 98.14%
Epoch 195 - loss: 0.9165, acc: 98.83% / test_loss: 0.9250, test_acc: 97.99%
Epoch 196 - loss: 0.9158, acc: 98.94% / test_loss: 0.9231, test_acc: 98.17%
Epoch 197 - loss: 0.9160, acc: 98.88% / test_loss: 0.9256, test_acc: 97.95%
Epoch 198 - loss: 0.9153, acc: 98.97% / test_loss: 0.9222, test_acc: 98.26%
Epoch 199 - loss: 0.9134, acc: 99.12% / test_loss: 0.9198, test_acc: 98.49%
Epoch 200 - loss: 0.9167, acc: 98.80% / test_loss: 0.9220, test_acc: 98.32%
Epoch 201 - loss: 0.9136, acc: 99.12% / test_loss: 0.9223, test_acc: 98.21%
Epoch 202 - loss: 0.9146, acc: 99.03% / test_loss: 0.9209, test_acc: 98.39%
Epoch 203 - loss: 0.9129, acc: 99.18% / test_loss: 0.9230, test_acc: 98.20%
Epoch 204 - loss: 0.9132, acc: 99.19% / test_loss: 0.9205, test_acc: 98.44%
Epoch 205 - loss: 0.9128, acc: 99.20% / test_loss: 0.9246, test_acc: 98.03%
Epoch 206 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.35%
Epoch 207 - loss: 0.9145, acc: 99.02% / test_loss: 0.9231, test_acc: 98.17%
Epoch 208 - loss: 0.9125, acc: 99.24% / test_loss: 0.9201, test_acc: 98.50%
Epoch 209 - loss: 0.9124, acc: 99.27% / test_loss: 0.9183, test_acc: 98.65%
Epoch 210 - loss: 0.9126, acc: 99.22% / test_loss: 0.9197, test_acc: 98.51%
Epoch 211 - loss: 0.9122, acc: 99.28% / test_loss: 0.9192, test_acc: 98.57%
Epoch 212 - loss: 0.9117, acc: 99.33% / test_loss: 0.9202, test_acc: 98.47%
Epoch 213 - loss: 0.9129, acc: 99.18% / test_loss: 0.9234, test_acc: 98.12%
Epoch 214 - loss: 0.9143, acc: 99.04% / test_loss: 0.9205, test_acc: 98.44%
Epoch 215 - loss: 0.9116, acc: 99.32% / test_loss: 0.9206, test_acc: 98.42%
Epoch 216 - loss: 0.9137, acc: 99.11% / test_loss: 0.9241, test_acc: 98.08%
Epoch 217 - loss: 0.9128, acc: 99.21% / test_loss: 0.9235, test_acc: 98.14%
Epoch 218 - loss: 0.9122, acc: 99.28% / test_loss: 0.9181, test_acc: 98.67%
Epoch 219 - loss: 0.9123, acc: 99.24% / test_loss: 0.9206, test_acc: 98.43%
Epoch 220 - loss: 0.9138, acc: 99.09% / test_loss: 0.9192, test_acc: 98.56%
Epoch 221 - loss: 0.9124, acc: 99.24% / test_loss: 0.9182, test_acc: 98.65%
Epoch 222 - loss: 0.9138, acc: 99.09% / test_loss: 0.9223, test_acc: 98.29%
Epoch 223 - loss: 0.9127, acc: 99.23% / test_loss: 0.9179, test_acc: 98.70%
Epoch 224 - loss: 0.9127, acc: 99.23% / test_loss: 0.9204, test_acc: 98.39%
Epoch 225 - loss: 0.9120, acc: 99.30% / test_loss: 0.9202, test_acc: 98.45%
Epoch 226 - loss: 0.9116, acc: 99.34% / test_loss: 0.9198, test_acc: 98.51%
Epoch 227 - loss: 0.9120, acc: 99.28% / test_loss: 0.9281, test_acc: 97.67%
Epoch 228 - loss: 0.9131, acc: 99.18% / test_loss: 0.9207, test_acc: 98.42%
Epoch 229 - loss: 0.9119, acc: 99.31% / test_loss: 0.9191, test_acc: 98.57%
Epoch 230 - loss: 0.9109, acc: 99.41% / test_loss: 0.9182, test_acc: 98.68%
Epoch 231 - loss: 0.9110, acc: 99.37% / test_loss: 0.9232, test_acc: 98.13%
Epoch 232 - loss: 0.9118, acc: 99.31% / test_loss: 0.9190, test_acc: 98.58%
Epoch 233 - loss: 0.9129, acc: 99.19% / test_loss: 0.9185, test_acc: 98.67%
Epoch 234 - loss: 0.9112, acc: 99.37% / test_loss: 0.9198, test_acc: 98.53%
Epoch 235 - loss: 0.9142, acc: 99.07% / test_loss: 0.9191, test_acc: 98.60%
Epoch 236 - loss: 0.9101, acc: 99.49% / test_loss: 0.9183, test_acc: 98.66%
Epoch 237 - loss: 0.9116, acc: 99.31% / test_loss: 0.9186, test_acc: 98.63%
Epoch 238 - loss: 0.9133, acc: 99.17% / test_loss: 0.9189, test_acc: 98.58%
Epoch 239 - loss: 0.9118, acc: 99.32% / test_loss: 0.9187, test_acc: 98.60%
Epoch 240 - loss: 0.9105, acc: 99.43% / test_loss: 0.9186, test_acc: 98.62%
Epoch 241 - loss: 0.9119, acc: 99.30% / test_loss: 0.9197, test_acc: 98.51%
Epoch 242 - loss: 0.9135, acc: 99.11% / test_loss: 0.9189, test_acc: 98.57%
Epoch 243 - loss: 0.9110, acc: 99.41% / test_loss: 0.9186, test_acc: 98.61%
Epoch 244 - loss: 0.9110, acc: 99.38% / test_loss: 0.9179, test_acc: 98.67%
Epoch 245 - loss: 0.9112, acc: 99.37% / test_loss: 0.9220, test_acc: 98.26%
Epoch 246 - loss: 0.9127, acc: 99.21% / test_loss: 0.9266, test_acc: 97.78%
Epoch 247 - loss: 0.9118, acc: 99.29% / test_loss: 0.9192, test_acc: 98.57%
Epoch 248 - loss: 0.9103, acc: 99.48% / test_loss: 0.9197, test_acc: 98.48%
Epoch 249 - loss: 0.9112, acc: 99.37% / test_loss: 0.9186, test_acc: 98.61%
Epoch 250 - loss: 0.9101, acc: 99.47% / test_loss: 0.9179, test_acc: 98.69%
Epoch 251 - loss: 0.9115, acc: 99.31% / test_loss: 0.9214, test_acc: 98.38%
Epoch 252 - loss: 0.9114, acc: 99.35% / test_loss: 0.9218, test_acc: 98.28%
Epoch 253 - loss: 0.9104, acc: 99.44% / test_loss: 0.9179, test_acc: 98.72%
Epoch 254 - loss: 0.9110, acc: 99.38% / test_loss: 0.9198, test_acc: 98.51%
Epoch 255 - loss: 0.9105, acc: 99.44% / test_loss: 0.9178, test_acc: 98.69%
Epoch 256 - loss: 0.9112, acc: 99.34% / test_loss: 0.9200, test_acc: 98.47%
Epoch 257 - loss: 0.9110, acc: 99.40% / test_loss: 0.9190, test_acc: 98.58%
Epoch 258 - loss: 0.9118, acc: 99.31% / test_loss: 0.9196, test_acc: 98.53%
Epoch 259 - loss: 0.9106, acc: 99.43% / test_loss: 0.9188, test_acc: 98.58%
Epoch 260 - loss: 0.9106, acc: 99.43% / test_loss: 0.9210, test_acc: 98.35%
Epoch 261 - loss: 0.9106, acc: 99.41% / test_loss: 0.9198, test_acc: 98.51%
Epoch 262 - loss: 0.9097, acc: 99.52% / test_loss: 0.9182, test_acc: 98.63%
Epoch 263 - loss: 0.9105, acc: 99.42% / test_loss: 0.9189, test_acc: 98.56%
Epoch 264 - loss: 0.9101, acc: 99.49% / test_loss: 0.9185, test_acc: 98.65%
Epoch 265 - loss: 0.9099, acc: 99.50% / test_loss: 0.9183, test_acc: 98.67%
Epoch 266 - loss: 0.9111, acc: 99.37% / test_loss: 0.9188, test_acc: 98.60%
Epoch 267 - loss: 0.9110, acc: 99.40% / test_loss: 0.9219, test_acc: 98.27%
Epoch 268 - loss: 0.9125, acc: 99.21% / test_loss: 0.9183, test_acc: 98.66%
Epoch 269 - loss: 0.9103, acc: 99.45% / test_loss: 0.9193, test_acc: 98.57%
Epoch 270 - loss: 0.9106, acc: 99.40% / test_loss: 0.9184, test_acc: 98.66%
Epoch 271 - loss: 0.9118, acc: 99.29% / test_loss: 0.9190, test_acc: 98.56%
Epoch 272 - loss: 0.9118, acc: 99.30% / test_loss: 0.9184, test_acc: 98.63%
Epoch 273 - loss: 0.9106, acc: 99.43% / test_loss: 0.9172, test_acc: 98.75%
Epoch 274 - loss: 0.9101, acc: 99.50% / test_loss: 0.9171, test_acc: 98.78%
Epoch 275 - loss: 0.9099, acc: 99.51% / test_loss: 0.9179, test_acc: 98.68%
Epoch 276 - loss: 0.9098, acc: 99.52% / test_loss: 0.9189, test_acc: 98.57%
Epoch 277 - loss: 0.9121, acc: 99.26% / test_loss: 0.9220, test_acc: 98.27%
Epoch 278 - loss: 0.9108, acc: 99.40% / test_loss: 0.9182, test_acc: 98.66%
Epoch 279 - loss: 0.9108, acc: 99.40% / test_loss: 0.9187, test_acc: 98.64%
Epoch 280 - loss: 0.9104, acc: 99.45% / test_loss: 0.9187, test_acc: 98.62%
Epoch 281 - loss: 0.9100, acc: 99.48% / test_loss: 0.9189, test_acc: 98.60%
Epoch 282 - loss: 0.9098, acc: 99.52% / test_loss: 0.9189, test_acc: 98.55%
Epoch 283 - loss: 0.9114, acc: 99.35% / test_loss: 0.9187, test_acc: 98.63%
Epoch 284 - loss: 0.9121, acc: 99.26% / test_loss: 0.9185, test_acc: 98.62%
Epoch 285 - loss: 0.9114, acc: 99.34% / test_loss: 0.9241, test_acc: 98.08%
Epoch 286 - loss: 0.9104, acc: 99.44% / test_loss: 0.9187, test_acc: 98.62%
Epoch 287 - loss: 0.9108, acc: 99.39% / test_loss: 0.9193, test_acc: 98.54%
Epoch 288 - loss: 0.9096, acc: 99.52% / test_loss: 0.9180, test_acc: 98.69%
Epoch 289 - loss: 0.9103, acc: 99.44% / test_loss: 0.9192, test_acc: 98.54%
Epoch 290 - loss: 0.9119, acc: 99.28% / test_loss: 0.9193, test_acc: 98.54%
Epoch 291 - loss: 0.9107, acc: 99.42% / test_loss: 0.9231, test_acc: 98.14%
Epoch 292 - loss: 0.9100, acc: 99.48% / test_loss: 0.9185, test_acc: 98.64%
Epoch 293 - loss: 0.9099, acc: 99.49% / test_loss: 0.9190, test_acc: 98.59%
Epoch 294 - loss: 0.9105, acc: 99.45% / test_loss: 0.9187, test_acc: 98.62%
Epoch 295 - loss: 0.9108, acc: 99.40% / test_loss: 0.9195, test_acc: 98.54%
Epoch 296 - loss: 0.9109, acc: 99.40% / test_loss: 0.9192, test_acc: 98.54%
Epoch 297 - loss: 0.9099, acc: 99.49% / test_loss: 0.9175, test_acc: 98.75%
Epoch 298 - loss: 0.9105, acc: 99.43% / test_loss: 0.9185, test_acc: 98.62%
Epoch 299 - loss: 0.9109, acc: 99.40% / test_loss: 0.9260, test_acc: 97.91%
Epoch 300 - loss: 0.9154, acc: 98.95% / test_loss: 0.9195, test_acc: 98.54%
Epoch 301 - loss: 0.9107, acc: 99.41% / test_loss: 0.9183, test_acc: 98.65%
Epoch 302 - loss: 0.9119, acc: 99.30% / test_loss: 0.9228, test_acc: 98.20%
Epoch 303 - loss: 0.9136, acc: 99.13% / test_loss: 0.9205, test_acc: 98.40%
Epoch 304 - loss: 0.9102, acc: 99.47% / test_loss: 0.9196, test_acc: 98.53%
Epoch 305 - loss: 0.9104, acc: 99.43% / test_loss: 0.9176, test_acc: 98.72%
Epoch 306 - loss: 0.9102, acc: 99.46% / test_loss: 0.9186, test_acc: 98.60%
Epoch 307 - loss: 0.9097, acc: 99.52% / test_loss: 0.9181, test_acc: 98.66%
Epoch 308 - loss: 0.9099, acc: 99.49% / test_loss: 0.9220, test_acc: 98.29%
Epoch 309 - loss: 0.9120, acc: 99.28% / test_loss: 0.9176, test_acc: 98.72%
Epoch 310 - loss: 0.9100, acc: 99.49% / test_loss: 0.9180, test_acc: 98.71%
Epoch 311 - loss: 0.9093, acc: 99.57% / test_loss: 0.9174, test_acc: 98.74%
Epoch 312 - loss: 0.9090, acc: 99.58% / test_loss: 0.9170, test_acc: 98.78%
Epoch 313 - loss: 0.9114, acc: 99.34% / test_loss: 0.9275, test_acc: 97.70%
Epoch 314 - loss: 0.9106, acc: 99.41% / test_loss: 0.9180, test_acc: 98.68%
Epoch 315 - loss: 0.9099, acc: 99.49% / test_loss: 0.9190, test_acc: 98.59%
Epoch 316 - loss: 0.9114, acc: 99.34% / test_loss: 0.9334, test_acc: 97.17%
Epoch 317 - loss: 0.9110, acc: 99.39% / test_loss: 0.9174, test_acc: 98.74%
Epoch 318 - loss: 0.9112, acc: 99.36% / test_loss: 0.9186, test_acc: 98.59%
Epoch 319 - loss: 0.9123, acc: 99.24% / test_loss: 0.9177, test_acc: 98.71%
Epoch 320 - loss: 0.9104, acc: 99.43% / test_loss: 0.9175, test_acc: 98.72%
Epoch 321 - loss: 0.9092, acc: 99.57% / test_loss: 0.9179, test_acc: 98.68%
Epoch 322 - loss: 0.9094, acc: 99.55% / test_loss: 0.9186, test_acc: 98.62%
Epoch 323 - loss: 0.9109, acc: 99.40% / test_loss: 0.9187, test_acc: 98.60%
Epoch 324 - loss: 0.9108, acc: 99.41% / test_loss: 0.9186, test_acc: 98.60%
Epoch 325 - loss: 0.9095, acc: 99.53% / test_loss: 0.9173, test_acc: 98.75%
Epoch 326 - loss: 0.9091, acc: 99.58% / test_loss: 0.9226, test_acc: 98.19%
Epoch 327 - loss: 0.9123, acc: 99.23% / test_loss: 0.9202, test_acc: 98.43%
Epoch 328 - loss: 0.9095, acc: 99.52% / test_loss: 0.9194, test_acc: 98.51%
Epoch 329 - loss: 0.9098, acc: 99.50% / test_loss: 0.9176, test_acc: 98.72%
Epoch 330 - loss: 0.9089, acc: 99.60% / test_loss: 0.9178, test_acc: 98.68%
Epoch 331 - loss: 0.9098, acc: 99.50% / test_loss: 0.9221, test_acc: 98.25%
Epoch 332 - loss: 0.9123, acc: 99.23% / test_loss: 0.9276, test_acc: 97.69%
Epoch 333 - loss: 0.9110, acc: 99.39% / test_loss: 0.9184, test_acc: 98.63%
Epoch 334 - loss: 0.9102, acc: 99.47% / test_loss: 0.9195, test_acc: 98.51%
Epoch 335 - loss: 0.9100, acc: 99.47% / test_loss: 0.9179, test_acc: 98.70%
Epoch 336 - loss: 0.9088, acc: 99.61% / test_loss: 0.9176, test_acc: 98.72%
Epoch 337 - loss: 0.9095, acc: 99.54% / test_loss: 0.9179, test_acc: 98.66%
Epoch 338 - loss: 0.9102, acc: 99.48% / test_loss: 0.9189, test_acc: 98.60%
Epoch 339 - loss: 0.9121, acc: 99.27% / test_loss: 0.9183, test_acc: 98.65%
Epoch 340 - loss: 0.9106, acc: 99.40% / test_loss: 0.9186, test_acc: 98.63%
Epoch 341 - loss: 0.9107, acc: 99.42% / test_loss: 0.9192, test_acc: 98.54%
Epoch 342 - loss: 0.9103, acc: 99.43% / test_loss: 0.9173, test_acc: 98.77%
Epoch 343 - loss: 0.9085, acc: 99.64% / test_loss: 0.9178, test_acc: 98.71%
Epoch 344 - loss: 0.9095, acc: 99.53% / test_loss: 0.9173, test_acc: 98.74%
Epoch 345 - loss: 0.9091, acc: 99.58% / test_loss: 0.9177, test_acc: 98.70%
Epoch 346 - loss: 0.9108, acc: 99.40% / test_loss: 0.9184, test_acc: 98.65%
Epoch 347 - loss: 0.9105, acc: 99.45% / test_loss: 0.9208, test_acc: 98.42%
Epoch 348 - loss: 0.9116, acc: 99.34% / test_loss: 0.9191, test_acc: 98.54%
Epoch 349 - loss: 0.9098, acc: 99.49% / test_loss: 0.9205, test_acc: 98.44%
Epoch 350 - loss: 0.9092, acc: 99.57% / test_loss: 0.9178, test_acc: 98.66%
Epoch 351 - loss: 0.9099, acc: 99.50% / test_loss: 0.9210, test_acc: 98.38%
Epoch 352 - loss: 0.9105, acc: 99.43% / test_loss: 0.9191, test_acc: 98.60%
Epoch 353 - loss: 0.9109, acc: 99.40% / test_loss: 0.9189, test_acc: 98.61%
Epoch 354 - loss: 0.9115, acc: 99.32% / test_loss: 0.9195, test_acc: 98.54%
Epoch 355 - loss: 0.9094, acc: 99.55% / test_loss: 0.9183, test_acc: 98.66%
Epoch 356 - loss: 0.9109, acc: 99.40% / test_loss: 0.9177, test_acc: 98.73%
Epoch 357 - loss: 0.9096, acc: 99.52% / test_loss: 0.9181, test_acc: 98.64%
Epoch 358 - loss: 0.9098, acc: 99.49% / test_loss: 0.9188, test_acc: 98.61%
Epoch 359 - loss: 0.9108, acc: 99.43% / test_loss: 0.9181, test_acc: 98.69%
Epoch 360 - loss: 0.9088, acc: 99.61% / test_loss: 0.9172, test_acc: 98.78%
Epoch 361 - loss: 0.9099, acc: 99.50% / test_loss: 0.9185, test_acc: 98.61%
Epoch 362 - loss: 0.9095, acc: 99.53% / test_loss: 0.9180, test_acc: 98.69%
Epoch 363 - loss: 0.9088, acc: 99.61% / test_loss: 0.9168, test_acc: 98.80%
Epoch 364 - loss: 0.9096, acc: 99.53% / test_loss: 0.9176, test_acc: 98.69%
Epoch 365 - loss: 0.9088, acc: 99.61% / test_loss: 0.9173, test_acc: 98.76%
Epoch 366 - loss: 0.9088, acc: 99.61% / test_loss: 0.9175, test_acc: 98.75%
Epoch 367 - loss: 0.9097, acc: 99.53% / test_loss: 0.9204, test_acc: 98.46%
Epoch 368 - loss: 0.9114, acc: 99.34% / test_loss: 0.9170, test_acc: 98.78%
Epoch 369 - loss: 0.9093, acc: 99.55% / test_loss: 0.9178, test_acc: 98.68%
Epoch 370 - loss: 0.9110, acc: 99.38% / test_loss: 0.9220, test_acc: 98.26%
Epoch 371 - loss: 0.9105, acc: 99.43% / test_loss: 0.9185, test_acc: 98.64%
Epoch 372 - loss: 0.9101, acc: 99.47% / test_loss: 0.9189, test_acc: 98.60%
Epoch 373 - loss: 0.9110, acc: 99.40% / test_loss: 0.9183, test_acc: 98.66%
Epoch 374 - loss: 0.9102, acc: 99.46% / test_loss: 0.9186, test_acc: 98.64%
Epoch 375 - loss: 0.9094, acc: 99.55% / test_loss: 0.9187, test_acc: 98.60%
Epoch 376 - loss: 0.9092, acc: 99.56% / test_loss: 0.9179, test_acc: 98.69%
Epoch 377 - loss: 0.9100, acc: 99.48% / test_loss: 0.9186, test_acc: 98.60%
Epoch 378 - loss: 0.9107, acc: 99.41% / test_loss: 0.9225, test_acc: 98.23%
Epoch 379 - loss: 0.9112, acc: 99.36% / test_loss: 0.9205, test_acc: 98.45%
Epoch 380 - loss: 0.9099, acc: 99.49% / test_loss: 0.9196, test_acc: 98.53%
Epoch 381 - loss: 0.9094, acc: 99.55% / test_loss: 0.9192, test_acc: 98.57%
Epoch 382 - loss: 0.9095, acc: 99.52% / test_loss: 0.9176, test_acc: 98.73%
Epoch 383 - loss: 0.9089, acc: 99.60% / test_loss: 0.9187, test_acc: 98.60%
Epoch 384 - loss: 0.9085, acc: 99.64% / test_loss: 0.9183, test_acc: 98.64%
Epoch 385 - loss: 0.9087, acc: 99.62% / test_loss: 0.9173, test_acc: 98.73%
Epoch 386 - loss: 0.9111, acc: 99.37% / test_loss: 0.9216, test_acc: 98.29%
Epoch 387 - loss: 0.9094, acc: 99.55% / test_loss: 0.9181, test_acc: 98.66%
Epoch 388 - loss: 0.9096, acc: 99.52% / test_loss: 0.9180, test_acc: 98.66%
Epoch 389 - loss: 0.9089, acc: 99.59% / test_loss: 0.9180, test_acc: 98.68%
Epoch 390 - loss: 0.9088, acc: 99.61% / test_loss: 0.9202, test_acc: 98.46%
Epoch 391 - loss: 0.9088, acc: 99.61% / test_loss: 0.9173, test_acc: 98.75%
Epoch 392 - loss: 0.9094, acc: 99.53% / test_loss: 0.9186, test_acc: 98.60%
Epoch 393 - loss: 0.9100, acc: 99.47% / test_loss: 0.9180, test_acc: 98.68%
Epoch 394 - loss: 0.9098, acc: 99.51% / test_loss: 0.9178, test_acc: 98.67%
Epoch 395 - loss: 0.9097, acc: 99.50% / test_loss: 0.9189, test_acc: 98.57%
Epoch 396 - loss: 0.9094, acc: 99.52% / test_loss: 0.9190, test_acc: 98.59%
Epoch 397 - loss: 0.9113, acc: 99.34% / test_loss: 0.9167, test_acc: 98.81%
Epoch 398 - loss: 0.9092, acc: 99.56% / test_loss: 0.9171, test_acc: 98.76%
Epoch 399 - loss: 0.9105, acc: 99.43% / test_loss: 0.9181, test_acc: 98.69%
Epoch 400 - loss: 0.9094, acc: 99.54% / test_loss: 0.9179, test_acc: 98.69%
Best test accuracy 98.81% in epoch 397.
----------------------------------------------------------------------------------------------------
Run 5
Epoch 1 - loss: 1.3534, acc: 54.99% / test_loss: 1.2151, test_acc: 69.57%
Epoch 2 - loss: 1.1385, acc: 77.43% / test_loss: 1.0894, test_acc: 82.23%
Epoch 3 - loss: 1.0746, acc: 83.60% / test_loss: 1.0468, test_acc: 86.39%
Epoch 4 - loss: 1.0475, acc: 86.17% / test_loss: 1.0395, test_acc: 86.95%
Epoch 5 - loss: 1.0394, acc: 86.79% / test_loss: 1.0265, test_acc: 87.97%
Epoch 6 - loss: 1.0334, acc: 87.23% / test_loss: 1.0264, test_acc: 88.11%
Epoch 7 - loss: 1.0310, acc: 87.54% / test_loss: 1.0187, test_acc: 88.66%
Epoch 8 - loss: 1.0264, acc: 87.82% / test_loss: 1.0147, test_acc: 88.95%
Epoch 9 - loss: 1.0246, acc: 88.00% / test_loss: 1.0142, test_acc: 89.17%
Epoch 10 - loss: 1.0216, acc: 88.37% / test_loss: 1.0122, test_acc: 89.27%
Epoch 11 - loss: 1.0211, acc: 88.35% / test_loss: 1.0137, test_acc: 89.04%
Epoch 12 - loss: 1.0181, acc: 88.55% / test_loss: 1.0105, test_acc: 89.35%
Epoch 13 - loss: 1.0184, acc: 88.61% / test_loss: 1.0208, test_acc: 88.61%
Epoch 14 - loss: 1.0148, acc: 88.96% / test_loss: 1.0083, test_acc: 89.61%
Epoch 15 - loss: 1.0145, acc: 88.94% / test_loss: 1.0099, test_acc: 89.51%
Epoch 16 - loss: 1.0132, acc: 89.08% / test_loss: 1.0067, test_acc: 89.68%
Epoch 17 - loss: 1.0123, acc: 89.19% / test_loss: 1.0069, test_acc: 89.70%
Epoch 18 - loss: 1.0115, acc: 89.19% / test_loss: 1.0070, test_acc: 89.67%
Epoch 19 - loss: 1.0128, acc: 89.08% / test_loss: 1.0058, test_acc: 89.84%
Epoch 20 - loss: 1.0109, acc: 89.25% / test_loss: 1.0043, test_acc: 89.85%
Epoch 21 - loss: 1.0090, acc: 89.41% / test_loss: 1.0022, test_acc: 90.12%
Epoch 22 - loss: 1.0079, acc: 89.57% / test_loss: 1.0028, test_acc: 90.04%
Epoch 23 - loss: 1.0074, acc: 89.69% / test_loss: 1.0045, test_acc: 89.77%
Epoch 24 - loss: 1.0079, acc: 89.57% / test_loss: 1.0088, test_acc: 89.45%
Epoch 25 - loss: 1.0081, acc: 89.51% / test_loss: 1.0027, test_acc: 89.96%
Epoch 26 - loss: 1.0055, acc: 89.74% / test_loss: 1.0036, test_acc: 89.92%
Epoch 27 - loss: 1.0076, acc: 89.56% / test_loss: 1.0054, test_acc: 89.81%
Epoch 28 - loss: 1.0046, acc: 89.87% / test_loss: 1.0035, test_acc: 89.98%
Epoch 29 - loss: 1.0049, acc: 89.84% / test_loss: 1.0005, test_acc: 90.25%
Epoch 30 - loss: 1.0059, acc: 89.73% / test_loss: 1.0029, test_acc: 90.04%
Epoch 31 - loss: 1.0060, acc: 89.75% / test_loss: 1.0031, test_acc: 89.97%
Epoch 32 - loss: 1.0041, acc: 89.85% / test_loss: 1.0022, test_acc: 90.07%
Epoch 33 - loss: 1.0052, acc: 89.82% / test_loss: 1.0063, test_acc: 89.60%
Epoch 34 - loss: 1.0037, acc: 89.92% / test_loss: 1.0008, test_acc: 90.22%
Epoch 35 - loss: 1.0054, acc: 89.74% / test_loss: 0.9993, test_acc: 90.28%
Epoch 36 - loss: 1.0039, acc: 89.85% / test_loss: 1.0014, test_acc: 90.15%
Epoch 37 - loss: 1.0040, acc: 89.86% / test_loss: 1.0004, test_acc: 90.25%
Epoch 38 - loss: 1.0038, acc: 89.89% / test_loss: 1.0044, test_acc: 89.79%
Epoch 39 - loss: 1.0041, acc: 89.87% / test_loss: 1.0021, test_acc: 90.11%
Epoch 40 - loss: 1.0026, acc: 89.97% / test_loss: 1.0000, test_acc: 90.22%
Epoch 41 - loss: 1.0041, acc: 89.91% / test_loss: 1.0006, test_acc: 90.21%
Epoch 42 - loss: 1.0035, acc: 89.94% / test_loss: 1.0025, test_acc: 90.07%
Epoch 43 - loss: 1.0027, acc: 90.02% / test_loss: 1.0001, test_acc: 90.24%
Epoch 44 - loss: 1.0053, acc: 89.72% / test_loss: 1.0011, test_acc: 90.15%
Epoch 45 - loss: 1.0025, acc: 90.00% / test_loss: 0.9992, test_acc: 90.33%
Epoch 46 - loss: 1.0031, acc: 89.92% / test_loss: 0.9981, test_acc: 90.39%
Epoch 47 - loss: 1.0014, acc: 90.08% / test_loss: 0.9998, test_acc: 90.25%
Epoch 48 - loss: 1.0021, acc: 90.00% / test_loss: 1.0079, test_acc: 89.63%
Epoch 49 - loss: 1.0029, acc: 89.96% / test_loss: 1.0013, test_acc: 90.25%
Epoch 50 - loss: 1.0019, acc: 90.06% / test_loss: 0.9987, test_acc: 90.32%
Epoch 51 - loss: 1.0034, acc: 89.92% / test_loss: 0.9987, test_acc: 90.34%
Epoch 52 - loss: 1.0019, acc: 90.03% / test_loss: 0.9992, test_acc: 90.28%
Epoch 53 - loss: 1.0019, acc: 90.06% / test_loss: 1.0011, test_acc: 90.07%
Epoch 54 - loss: 1.0017, acc: 90.12% / test_loss: 0.9981, test_acc: 90.43%
Epoch 55 - loss: 1.0014, acc: 90.10% / test_loss: 0.9991, test_acc: 90.37%
Epoch 56 - loss: 1.0015, acc: 90.06% / test_loss: 0.9990, test_acc: 90.30%
Epoch 57 - loss: 1.0000, acc: 90.25% / test_loss: 0.9969, test_acc: 90.46%
Epoch 58 - loss: 1.0005, acc: 90.14% / test_loss: 0.9979, test_acc: 90.37%
Epoch 59 - loss: 0.9991, acc: 90.28% / test_loss: 0.9968, test_acc: 90.53%
Epoch 60 - loss: 1.0003, acc: 90.22% / test_loss: 0.9974, test_acc: 90.46%
Epoch 61 - loss: 0.9986, acc: 90.36% / test_loss: 0.9952, test_acc: 90.66%
Epoch 62 - loss: 0.9973, acc: 90.45% / test_loss: 0.9979, test_acc: 90.40%
Epoch 63 - loss: 0.9977, acc: 90.44% / test_loss: 0.9950, test_acc: 90.68%
Epoch 64 - loss: 0.9979, acc: 90.43% / test_loss: 0.9966, test_acc: 90.55%
Epoch 65 - loss: 0.9977, acc: 90.43% / test_loss: 0.9976, test_acc: 90.46%
Epoch 66 - loss: 0.9978, acc: 90.43% / test_loss: 0.9955, test_acc: 90.68%
Epoch 67 - loss: 0.9973, acc: 90.48% / test_loss: 0.9961, test_acc: 90.59%
Epoch 68 - loss: 0.9969, acc: 90.50% / test_loss: 0.9963, test_acc: 90.59%
Epoch 69 - loss: 0.9960, acc: 90.64% / test_loss: 0.9943, test_acc: 90.88%
Epoch 70 - loss: 0.9948, acc: 90.71% / test_loss: 0.9902, test_acc: 91.17%
Epoch 71 - loss: 0.9882, acc: 91.40% / test_loss: 0.9857, test_acc: 91.60%
Epoch 72 - loss: 0.9862, acc: 91.57% / test_loss: 0.9941, test_acc: 90.94%
Epoch 73 - loss: 0.9840, acc: 91.82% / test_loss: 0.9851, test_acc: 91.74%
Epoch 74 - loss: 0.9826, acc: 91.95% / test_loss: 0.9836, test_acc: 91.88%
Epoch 75 - loss: 0.9816, acc: 92.02% / test_loss: 0.9828, test_acc: 91.94%
Epoch 76 - loss: 0.9796, acc: 92.26% / test_loss: 0.9824, test_acc: 91.92%
Epoch 77 - loss: 0.9796, acc: 92.22% / test_loss: 0.9837, test_acc: 91.78%
Epoch 78 - loss: 0.9790, acc: 92.30% / test_loss: 0.9852, test_acc: 91.67%
Epoch 79 - loss: 0.9775, acc: 92.43% / test_loss: 0.9809, test_acc: 92.13%
Epoch 80 - loss: 0.9773, acc: 92.51% / test_loss: 0.9820, test_acc: 92.00%
Epoch 81 - loss: 0.9763, acc: 92.53% / test_loss: 0.9800, test_acc: 92.21%
Epoch 82 - loss: 0.9780, acc: 92.37% / test_loss: 0.9798, test_acc: 92.22%
Epoch 83 - loss: 0.9753, acc: 92.65% / test_loss: 0.9797, test_acc: 92.14%
Epoch 84 - loss: 0.9754, acc: 92.61% / test_loss: 0.9789, test_acc: 92.27%
Epoch 85 - loss: 0.9769, acc: 92.46% / test_loss: 0.9787, test_acc: 92.29%
Epoch 86 - loss: 0.9753, acc: 92.62% / test_loss: 0.9807, test_acc: 92.13%
Epoch 87 - loss: 0.9749, acc: 92.68% / test_loss: 0.9777, test_acc: 92.36%
Epoch 88 - loss: 0.9738, acc: 92.80% / test_loss: 0.9771, test_acc: 92.44%
Epoch 89 - loss: 0.9747, acc: 92.75% / test_loss: 0.9782, test_acc: 92.34%
Epoch 90 - loss: 0.9754, acc: 92.63% / test_loss: 0.9776, test_acc: 92.40%
Epoch 91 - loss: 0.9756, acc: 92.59% / test_loss: 0.9778, test_acc: 92.35%
Epoch 92 - loss: 0.9735, acc: 92.80% / test_loss: 0.9770, test_acc: 92.45%
Epoch 93 - loss: 0.9736, acc: 92.80% / test_loss: 0.9775, test_acc: 92.36%
Epoch 94 - loss: 0.9737, acc: 92.81% / test_loss: 0.9779, test_acc: 92.39%
Epoch 95 - loss: 0.9737, acc: 92.77% / test_loss: 0.9788, test_acc: 92.29%
Epoch 96 - loss: 0.9731, acc: 92.87% / test_loss: 0.9783, test_acc: 92.31%
Epoch 97 - loss: 0.9742, acc: 92.75% / test_loss: 0.9770, test_acc: 92.49%
Epoch 98 - loss: 0.9742, acc: 92.74% / test_loss: 0.9775, test_acc: 92.40%
Epoch 99 - loss: 0.9728, acc: 92.91% / test_loss: 0.9771, test_acc: 92.44%
Epoch 100 - loss: 0.9724, acc: 92.94% / test_loss: 0.9761, test_acc: 92.53%
Epoch 101 - loss: 0.9722, acc: 92.94% / test_loss: 0.9789, test_acc: 92.29%
Epoch 102 - loss: 0.9719, acc: 92.96% / test_loss: 0.9764, test_acc: 92.53%
Epoch 103 - loss: 0.9712, acc: 93.02% / test_loss: 0.9750, test_acc: 92.65%
Epoch 104 - loss: 0.9727, acc: 92.88% / test_loss: 0.9769, test_acc: 92.50%
Epoch 105 - loss: 0.9729, acc: 92.86% / test_loss: 0.9762, test_acc: 92.55%
Epoch 106 - loss: 0.9725, acc: 92.90% / test_loss: 0.9747, test_acc: 92.66%
Epoch 107 - loss: 0.9724, acc: 92.93% / test_loss: 0.9760, test_acc: 92.58%
Epoch 108 - loss: 0.9721, acc: 92.94% / test_loss: 0.9766, test_acc: 92.48%
Epoch 109 - loss: 0.9716, acc: 92.97% / test_loss: 0.9772, test_acc: 92.39%
Epoch 110 - loss: 0.9725, acc: 92.86% / test_loss: 0.9824, test_acc: 91.94%
Epoch 111 - loss: 0.9725, acc: 92.89% / test_loss: 0.9748, test_acc: 92.65%
Epoch 112 - loss: 0.9703, acc: 93.10% / test_loss: 0.9756, test_acc: 92.56%
Epoch 113 - loss: 0.9716, acc: 92.99% / test_loss: 0.9764, test_acc: 92.54%
Epoch 114 - loss: 0.9716, acc: 92.97% / test_loss: 0.9778, test_acc: 92.49%
Epoch 115 - loss: 0.9713, acc: 93.02% / test_loss: 0.9744, test_acc: 92.71%
Epoch 116 - loss: 0.9710, acc: 93.07% / test_loss: 0.9759, test_acc: 92.53%
Epoch 117 - loss: 0.9722, acc: 92.89% / test_loss: 0.9755, test_acc: 92.59%
Epoch 118 - loss: 0.9714, acc: 93.01% / test_loss: 0.9768, test_acc: 92.45%
Epoch 119 - loss: 0.9712, acc: 93.05% / test_loss: 0.9747, test_acc: 92.68%
Epoch 120 - loss: 0.9702, acc: 93.10% / test_loss: 0.9745, test_acc: 92.64%
Epoch 121 - loss: 0.9706, acc: 93.06% / test_loss: 0.9736, test_acc: 92.81%
Epoch 122 - loss: 0.9732, acc: 92.79% / test_loss: 0.9776, test_acc: 92.43%
Epoch 123 - loss: 0.9710, acc: 93.03% / test_loss: 0.9750, test_acc: 92.65%
Epoch 124 - loss: 0.9712, acc: 93.05% / test_loss: 0.9738, test_acc: 92.74%
Epoch 125 - loss: 0.9709, acc: 93.04% / test_loss: 0.9750, test_acc: 92.62%
Epoch 126 - loss: 0.9706, acc: 93.05% / test_loss: 0.9755, test_acc: 92.56%
Epoch 127 - loss: 0.9694, acc: 93.17% / test_loss: 0.9769, test_acc: 92.43%
Epoch 128 - loss: 0.9709, acc: 93.03% / test_loss: 0.9755, test_acc: 92.56%
Epoch 129 - loss: 0.9691, acc: 93.24% / test_loss: 0.9733, test_acc: 92.79%
Epoch 130 - loss: 0.9704, acc: 93.10% / test_loss: 0.9740, test_acc: 92.75%
Epoch 131 - loss: 0.9711, acc: 93.02% / test_loss: 0.9749, test_acc: 92.61%
Epoch 132 - loss: 0.9702, acc: 93.10% / test_loss: 0.9755, test_acc: 92.57%
Epoch 133 - loss: 0.9700, acc: 93.12% / test_loss: 0.9745, test_acc: 92.65%
Epoch 134 - loss: 0.9688, acc: 93.27% / test_loss: 0.9737, test_acc: 92.75%
Epoch 135 - loss: 0.9685, acc: 93.26% / test_loss: 0.9777, test_acc: 92.34%
Epoch 136 - loss: 0.9701, acc: 93.14% / test_loss: 0.9734, test_acc: 92.79%
Epoch 137 - loss: 0.9695, acc: 93.17% / test_loss: 0.9734, test_acc: 92.80%
Epoch 138 - loss: 0.9698, acc: 93.12% / test_loss: 0.9761, test_acc: 92.55%
Epoch 139 - loss: 0.9704, acc: 93.06% / test_loss: 0.9756, test_acc: 92.61%
Epoch 140 - loss: 0.9688, acc: 93.24% / test_loss: 0.9759, test_acc: 92.55%
Epoch 141 - loss: 0.9721, acc: 92.91% / test_loss: 0.9760, test_acc: 92.55%
Epoch 142 - loss: 0.9710, acc: 93.02% / test_loss: 0.9747, test_acc: 92.63%
Epoch 143 - loss: 0.9698, acc: 93.15% / test_loss: 0.9774, test_acc: 92.40%
Epoch 144 - loss: 0.9703, acc: 93.09% / test_loss: 0.9742, test_acc: 92.67%
Epoch 145 - loss: 0.9690, acc: 93.22% / test_loss: 0.9744, test_acc: 92.66%
Epoch 146 - loss: 0.9697, acc: 93.17% / test_loss: 0.9735, test_acc: 92.77%
Epoch 147 - loss: 0.9690, acc: 93.24% / test_loss: 0.9729, test_acc: 92.82%
Epoch 148 - loss: 0.9686, acc: 93.25% / test_loss: 0.9752, test_acc: 92.62%
Epoch 149 - loss: 0.9698, acc: 93.14% / test_loss: 0.9757, test_acc: 92.60%
Epoch 150 - loss: 0.9701, acc: 93.11% / test_loss: 0.9747, test_acc: 92.65%
Epoch 151 - loss: 0.9700, acc: 93.12% / test_loss: 0.9736, test_acc: 92.74%
Epoch 152 - loss: 0.9688, acc: 93.24% / test_loss: 0.9743, test_acc: 92.71%
Epoch 153 - loss: 0.9687, acc: 93.24% / test_loss: 0.9732, test_acc: 92.78%
Epoch 154 - loss: 0.9683, acc: 93.29% / test_loss: 0.9739, test_acc: 92.71%
Epoch 155 - loss: 0.9692, acc: 93.20% / test_loss: 0.9744, test_acc: 92.71%
Epoch 156 - loss: 0.9679, acc: 93.33% / test_loss: 0.9745, test_acc: 92.66%
Epoch 157 - loss: 0.9697, acc: 93.14% / test_loss: 0.9750, test_acc: 92.61%
Epoch 158 - loss: 0.9689, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 159 - loss: 0.9680, acc: 93.30% / test_loss: 0.9745, test_acc: 92.68%
Epoch 160 - loss: 0.9704, acc: 93.06% / test_loss: 0.9749, test_acc: 92.63%
Epoch 161 - loss: 0.9703, acc: 93.09% / test_loss: 0.9741, test_acc: 92.70%
Epoch 162 - loss: 0.9723, acc: 92.93% / test_loss: 0.9761, test_acc: 92.50%
Epoch 163 - loss: 0.9703, acc: 93.10% / test_loss: 0.9766, test_acc: 92.44%
Epoch 164 - loss: 0.9686, acc: 93.25% / test_loss: 0.9735, test_acc: 92.74%
Epoch 165 - loss: 0.9696, acc: 93.17% / test_loss: 0.9732, test_acc: 92.82%
Epoch 166 - loss: 0.9686, acc: 93.27% / test_loss: 0.9748, test_acc: 92.63%
Epoch 167 - loss: 0.9687, acc: 93.24% / test_loss: 0.9739, test_acc: 92.69%
Epoch 168 - loss: 0.9675, acc: 93.35% / test_loss: 0.9730, test_acc: 92.80%
Epoch 169 - loss: 0.9692, acc: 93.20% / test_loss: 0.9756, test_acc: 92.58%
Epoch 170 - loss: 0.9703, acc: 93.08% / test_loss: 0.9743, test_acc: 92.68%
Epoch 171 - loss: 0.9692, acc: 93.19% / test_loss: 0.9742, test_acc: 92.68%
Epoch 172 - loss: 0.9687, acc: 93.22% / test_loss: 0.9791, test_acc: 92.23%
Epoch 173 - loss: 0.9694, acc: 93.17% / test_loss: 0.9737, test_acc: 92.74%
Epoch 174 - loss: 0.9687, acc: 93.23% / test_loss: 0.9737, test_acc: 92.74%
Epoch 175 - loss: 0.9693, acc: 93.16% / test_loss: 0.9765, test_acc: 92.49%
Epoch 176 - loss: 0.9702, acc: 93.07% / test_loss: 0.9745, test_acc: 92.67%
Epoch 177 - loss: 0.9688, acc: 93.24% / test_loss: 0.9747, test_acc: 92.65%
Epoch 178 - loss: 0.9694, acc: 93.16% / test_loss: 0.9738, test_acc: 92.73%
Epoch 179 - loss: 0.9683, acc: 93.27% / test_loss: 0.9748, test_acc: 92.59%
Epoch 180 - loss: 0.9683, acc: 93.27% / test_loss: 0.9724, test_acc: 92.85%
Epoch 181 - loss: 0.9688, acc: 93.21% / test_loss: 0.9761, test_acc: 92.50%
Epoch 182 - loss: 0.9700, acc: 93.13% / test_loss: 0.9746, test_acc: 92.68%
Epoch 183 - loss: 0.9685, acc: 93.25% / test_loss: 0.9745, test_acc: 92.69%
Epoch 184 - loss: 0.9689, acc: 93.21% / test_loss: 0.9728, test_acc: 92.84%
Epoch 185 - loss: 0.9674, acc: 93.36% / test_loss: 0.9729, test_acc: 92.79%
Epoch 186 - loss: 0.9696, acc: 93.14% / test_loss: 0.9841, test_acc: 91.72%
Epoch 187 - loss: 0.9709, acc: 92.99% / test_loss: 0.9732, test_acc: 92.78%
Epoch 188 - loss: 0.9692, acc: 93.20% / test_loss: 0.9726, test_acc: 92.83%
Epoch 189 - loss: 0.9690, acc: 93.20% / test_loss: 0.9732, test_acc: 92.81%
Epoch 190 - loss: 0.9676, acc: 93.33% / test_loss: 0.9736, test_acc: 92.76%
Epoch 191 - loss: 0.9688, acc: 93.21% / test_loss: 0.9730, test_acc: 92.81%
Epoch 192 - loss: 0.9683, acc: 93.29% / test_loss: 0.9735, test_acc: 92.75%
Epoch 193 - loss: 0.9695, acc: 93.14% / test_loss: 0.9744, test_acc: 92.68%
Epoch 194 - loss: 0.9688, acc: 93.23% / test_loss: 0.9761, test_acc: 92.50%
Epoch 195 - loss: 0.9682, acc: 93.28% / test_loss: 0.9727, test_acc: 92.82%
Epoch 196 - loss: 0.9672, acc: 93.38% / test_loss: 0.9720, test_acc: 92.92%
Epoch 197 - loss: 0.9678, acc: 93.32% / test_loss: 0.9742, test_acc: 92.74%
Epoch 198 - loss: 0.9692, acc: 93.19% / test_loss: 0.9762, test_acc: 92.49%
Epoch 199 - loss: 0.9692, acc: 93.19% / test_loss: 0.9738, test_acc: 92.72%
Epoch 200 - loss: 0.9675, acc: 93.35% / test_loss: 0.9735, test_acc: 92.71%
Epoch 201 - loss: 0.9689, acc: 93.20% / test_loss: 0.9739, test_acc: 92.71%
Epoch 202 - loss: 0.9695, acc: 93.17% / test_loss: 0.9738, test_acc: 92.71%
Epoch 203 - loss: 0.9694, acc: 93.16% / test_loss: 0.9761, test_acc: 92.55%
Epoch 204 - loss: 0.9680, acc: 93.32% / test_loss: 0.9713, test_acc: 92.97%
Epoch 205 - loss: 0.9671, acc: 93.40% / test_loss: 0.9708, test_acc: 93.04%
Epoch 206 - loss: 0.9659, acc: 93.51% / test_loss: 0.9717, test_acc: 92.94%
Epoch 207 - loss: 0.9667, acc: 93.42% / test_loss: 0.9716, test_acc: 92.94%
Epoch 208 - loss: 0.9678, acc: 93.33% / test_loss: 0.9700, test_acc: 93.08%
Epoch 209 - loss: 0.9665, acc: 93.45% / test_loss: 0.9718, test_acc: 92.94%
Epoch 210 - loss: 0.9659, acc: 93.51% / test_loss: 0.9767, test_acc: 92.47%
Epoch 211 - loss: 0.9662, acc: 93.51% / test_loss: 0.9713, test_acc: 92.99%
Epoch 212 - loss: 0.9662, acc: 93.51% / test_loss: 0.9722, test_acc: 92.89%
Epoch 213 - loss: 0.9666, acc: 93.44% / test_loss: 0.9700, test_acc: 93.12%
Epoch 214 - loss: 0.9660, acc: 93.51% / test_loss: 0.9777, test_acc: 92.32%
Epoch 215 - loss: 0.9689, acc: 93.24% / test_loss: 0.9728, test_acc: 92.89%
Epoch 216 - loss: 0.9676, acc: 93.37% / test_loss: 0.9698, test_acc: 93.14%
Epoch 217 - loss: 0.9651, acc: 93.60% / test_loss: 0.9699, test_acc: 93.14%
Epoch 218 - loss: 0.9653, acc: 93.57% / test_loss: 0.9695, test_acc: 93.17%
Epoch 219 - loss: 0.9678, acc: 93.33% / test_loss: 0.9702, test_acc: 93.11%
Epoch 220 - loss: 0.9666, acc: 93.44% / test_loss: 0.9721, test_acc: 92.88%
Epoch 221 - loss: 0.9658, acc: 93.51% / test_loss: 0.9705, test_acc: 93.09%
Epoch 222 - loss: 0.9666, acc: 93.45% / test_loss: 0.9711, test_acc: 92.98%
Epoch 223 - loss: 0.9659, acc: 93.51% / test_loss: 0.9715, test_acc: 92.94%
Epoch 224 - loss: 0.9657, acc: 93.53% / test_loss: 0.9702, test_acc: 93.08%
Epoch 225 - loss: 0.9652, acc: 93.58% / test_loss: 0.9694, test_acc: 93.19%
Epoch 226 - loss: 0.9660, acc: 93.50% / test_loss: 0.9717, test_acc: 92.91%
Epoch 227 - loss: 0.9667, acc: 93.44% / test_loss: 0.9730, test_acc: 92.83%
Epoch 228 - loss: 0.9665, acc: 93.46% / test_loss: 0.9699, test_acc: 93.10%
Epoch 229 - loss: 0.9656, acc: 93.56% / test_loss: 0.9714, test_acc: 92.96%
Epoch 230 - loss: 0.9651, acc: 93.60% / test_loss: 0.9692, test_acc: 93.19%
Epoch 231 - loss: 0.9638, acc: 93.73% / test_loss: 0.9693, test_acc: 93.15%
Epoch 232 - loss: 0.9658, acc: 93.53% / test_loss: 0.9701, test_acc: 93.09%
Epoch 233 - loss: 0.9658, acc: 93.52% / test_loss: 0.9706, test_acc: 93.07%
Epoch 234 - loss: 0.9648, acc: 93.61% / test_loss: 0.9686, test_acc: 93.27%
Epoch 235 - loss: 0.9640, acc: 93.69% / test_loss: 0.9690, test_acc: 93.20%
Epoch 236 - loss: 0.9652, acc: 93.57% / test_loss: 0.9714, test_acc: 93.00%
Epoch 237 - loss: 0.9643, acc: 93.68% / test_loss: 0.9693, test_acc: 93.17%
Epoch 238 - loss: 0.9657, acc: 93.54% / test_loss: 0.9697, test_acc: 93.12%
Epoch 239 - loss: 0.9642, acc: 93.67% / test_loss: 0.9693, test_acc: 93.19%
Epoch 240 - loss: 0.9639, acc: 93.70% / test_loss: 0.9707, test_acc: 93.05%
Epoch 241 - loss: 0.9677, acc: 93.35% / test_loss: 0.9704, test_acc: 93.08%
Epoch 242 - loss: 0.9648, acc: 93.61% / test_loss: 0.9719, test_acc: 92.94%
Epoch 243 - loss: 0.9655, acc: 93.56% / test_loss: 0.9695, test_acc: 93.19%
Epoch 244 - loss: 0.9647, acc: 93.66% / test_loss: 0.9690, test_acc: 93.22%
Epoch 245 - loss: 0.9651, acc: 93.60% / test_loss: 0.9702, test_acc: 93.08%
Epoch 246 - loss: 0.9649, acc: 93.61% / test_loss: 0.9708, test_acc: 93.01%
Epoch 247 - loss: 0.9643, acc: 93.68% / test_loss: 0.9684, test_acc: 93.30%
Epoch 248 - loss: 0.9639, acc: 93.71% / test_loss: 0.9695, test_acc: 93.18%
Epoch 249 - loss: 0.9640, acc: 93.70% / test_loss: 0.9704, test_acc: 93.06%
Epoch 250 - loss: 0.9651, acc: 93.57% / test_loss: 0.9694, test_acc: 93.20%
Epoch 251 - loss: 0.9667, acc: 93.42% / test_loss: 0.9732, test_acc: 92.75%
Epoch 252 - loss: 0.9645, acc: 93.66% / test_loss: 0.9696, test_acc: 93.15%
Epoch 253 - loss: 0.9640, acc: 93.71% / test_loss: 0.9708, test_acc: 93.01%
Epoch 254 - loss: 0.9651, acc: 93.59% / test_loss: 0.9679, test_acc: 93.29%
Epoch 255 - loss: 0.9643, acc: 93.67% / test_loss: 0.9690, test_acc: 93.19%
Epoch 256 - loss: 0.9673, acc: 93.37% / test_loss: 0.9693, test_acc: 93.18%
Epoch 257 - loss: 0.9644, acc: 93.67% / test_loss: 0.9705, test_acc: 93.03%
Epoch 258 - loss: 0.9654, acc: 93.54% / test_loss: 0.9694, test_acc: 93.17%
Epoch 259 - loss: 0.9642, acc: 93.67% / test_loss: 0.9681, test_acc: 93.29%
Epoch 260 - loss: 0.9638, acc: 93.72% / test_loss: 0.9692, test_acc: 93.19%
Epoch 261 - loss: 0.9648, acc: 93.64% / test_loss: 0.9692, test_acc: 93.17%
Epoch 262 - loss: 0.9640, acc: 93.71% / test_loss: 0.9724, test_acc: 92.82%
Epoch 263 - loss: 0.9650, acc: 93.59% / test_loss: 0.9705, test_acc: 93.08%
Epoch 264 - loss: 0.9638, acc: 93.72% / test_loss: 0.9687, test_acc: 93.24%
Epoch 265 - loss: 0.9631, acc: 93.79% / test_loss: 0.9680, test_acc: 93.31%
Epoch 266 - loss: 0.9631, acc: 93.79% / test_loss: 0.9685, test_acc: 93.22%
Epoch 267 - loss: 0.9640, acc: 93.71% / test_loss: 0.9688, test_acc: 93.22%
Epoch 268 - loss: 0.9641, acc: 93.69% / test_loss: 0.9694, test_acc: 93.15%
Epoch 269 - loss: 0.9639, acc: 93.70% / test_loss: 0.9684, test_acc: 93.27%
Epoch 270 - loss: 0.9646, acc: 93.64% / test_loss: 0.9700, test_acc: 93.12%
Epoch 271 - loss: 0.9641, acc: 93.70% / test_loss: 0.9689, test_acc: 93.26%
Epoch 272 - loss: 0.9639, acc: 93.71% / test_loss: 0.9682, test_acc: 93.31%
Epoch 273 - loss: 0.9646, acc: 93.65% / test_loss: 0.9706, test_acc: 93.11%
Epoch 274 - loss: 0.9654, acc: 93.56% / test_loss: 0.9705, test_acc: 93.04%
Epoch 275 - loss: 0.9644, acc: 93.66% / test_loss: 0.9700, test_acc: 93.10%
Epoch 276 - loss: 0.9641, acc: 93.70% / test_loss: 0.9716, test_acc: 92.97%
Epoch 277 - loss: 0.9653, acc: 93.56% / test_loss: 0.9687, test_acc: 93.24%
Epoch 278 - loss: 0.9632, acc: 93.78% / test_loss: 0.9681, test_acc: 93.31%
Epoch 279 - loss: 0.9634, acc: 93.76% / test_loss: 0.9687, test_acc: 93.25%
Epoch 280 - loss: 0.9638, acc: 93.74% / test_loss: 0.9691, test_acc: 93.20%
Epoch 281 - loss: 0.9650, acc: 93.60% / test_loss: 0.9685, test_acc: 93.26%
Epoch 282 - loss: 0.9643, acc: 93.67% / test_loss: 0.9686, test_acc: 93.25%
Epoch 283 - loss: 0.9630, acc: 93.80% / test_loss: 0.9685, test_acc: 93.25%
Epoch 284 - loss: 0.9638, acc: 93.73% / test_loss: 0.9694, test_acc: 93.18%
Epoch 285 - loss: 0.9642, acc: 93.69% / test_loss: 0.9681, test_acc: 93.31%
Epoch 286 - loss: 0.9632, acc: 93.77% / test_loss: 0.9698, test_acc: 93.15%
Epoch 287 - loss: 0.9633, acc: 93.76% / test_loss: 0.9707, test_acc: 93.05%
Epoch 288 - loss: 0.9631, acc: 93.79% / test_loss: 0.9703, test_acc: 93.08%
Epoch 289 - loss: 0.9637, acc: 93.74% / test_loss: 0.9697, test_acc: 93.15%
Epoch 290 - loss: 0.9636, acc: 93.75% / test_loss: 0.9690, test_acc: 93.20%
Epoch 291 - loss: 0.9644, acc: 93.64% / test_loss: 0.9692, test_acc: 93.21%
Epoch 292 - loss: 0.9629, acc: 93.82% / test_loss: 0.9701, test_acc: 93.08%
Epoch 293 - loss: 0.9629, acc: 93.79% / test_loss: 0.9696, test_acc: 93.14%
Epoch 294 - loss: 0.9630, acc: 93.79% / test_loss: 0.9674, test_acc: 93.38%
Epoch 295 - loss: 0.9641, acc: 93.68% / test_loss: 0.9684, test_acc: 93.24%
Epoch 296 - loss: 0.9619, acc: 93.89% / test_loss: 0.9676, test_acc: 93.33%
Epoch 297 - loss: 0.9624, acc: 93.88% / test_loss: 0.9688, test_acc: 93.23%
Epoch 298 - loss: 0.9623, acc: 93.85% / test_loss: 0.9678, test_acc: 93.27%
Epoch 299 - loss: 0.9625, acc: 93.85% / test_loss: 0.9676, test_acc: 93.33%
Epoch 300 - loss: 0.9628, acc: 93.83% / test_loss: 0.9682, test_acc: 93.27%
Epoch 301 - loss: 0.9636, acc: 93.74% / test_loss: 0.9724, test_acc: 92.90%
Epoch 302 - loss: 0.9644, acc: 93.66% / test_loss: 0.9682, test_acc: 93.27%
Epoch 303 - loss: 0.9631, acc: 93.78% / test_loss: 0.9686, test_acc: 93.27%
Epoch 304 - loss: 0.9639, acc: 93.74% / test_loss: 0.9718, test_acc: 92.92%
Epoch 305 - loss: 0.9627, acc: 93.83% / test_loss: 0.9680, test_acc: 93.30%
Epoch 306 - loss: 0.9627, acc: 93.82% / test_loss: 0.9684, test_acc: 93.24%
Epoch 307 - loss: 0.9622, acc: 93.87% / test_loss: 0.9676, test_acc: 93.34%
Epoch 308 - loss: 0.9629, acc: 93.81% / test_loss: 0.9675, test_acc: 93.36%
Epoch 309 - loss: 0.9621, acc: 93.88% / test_loss: 0.9674, test_acc: 93.34%
Epoch 310 - loss: 0.9623, acc: 93.86% / test_loss: 0.9676, test_acc: 93.33%
Epoch 311 - loss: 0.9618, acc: 93.91% / test_loss: 0.9681, test_acc: 93.32%
Epoch 312 - loss: 0.9629, acc: 93.82% / test_loss: 0.9720, test_acc: 92.90%
Epoch 313 - loss: 0.9637, acc: 93.73% / test_loss: 0.9701, test_acc: 93.09%
Epoch 314 - loss: 0.9628, acc: 93.79% / test_loss: 0.9682, test_acc: 93.27%
Epoch 315 - loss: 0.9635, acc: 93.75% / test_loss: 0.9688, test_acc: 93.23%
Epoch 316 - loss: 0.9464, acc: 95.61% / test_loss: 0.9273, test_acc: 97.75%
Epoch 317 - loss: 0.9206, acc: 98.38% / test_loss: 0.9283, test_acc: 97.67%
Epoch 318 - loss: 0.9195, acc: 98.51% / test_loss: 0.9282, test_acc: 97.70%
Epoch 319 - loss: 0.9188, acc: 98.59% / test_loss: 0.9239, test_acc: 98.09%
Epoch 320 - loss: 0.9181, acc: 98.66% / test_loss: 0.9244, test_acc: 98.01%
Epoch 321 - loss: 0.9196, acc: 98.51% / test_loss: 0.9269, test_acc: 97.77%
Epoch 322 - loss: 0.9190, acc: 98.56% / test_loss: 0.9248, test_acc: 97.99%
Epoch 323 - loss: 0.9181, acc: 98.66% / test_loss: 0.9244, test_acc: 98.02%
Epoch 324 - loss: 0.9179, acc: 98.70% / test_loss: 0.9264, test_acc: 97.86%
Epoch 325 - loss: 0.9181, acc: 98.69% / test_loss: 0.9242, test_acc: 98.05%
Epoch 326 - loss: 0.9156, acc: 98.95% / test_loss: 0.9225, test_acc: 98.24%
Epoch 327 - loss: 0.9153, acc: 98.98% / test_loss: 0.9255, test_acc: 97.95%
Epoch 328 - loss: 0.9148, acc: 99.02% / test_loss: 0.9233, test_acc: 98.18%
Epoch 329 - loss: 0.9153, acc: 98.94% / test_loss: 0.9234, test_acc: 98.14%
Epoch 330 - loss: 0.9168, acc: 98.78% / test_loss: 0.9227, test_acc: 98.20%
Epoch 331 - loss: 0.9153, acc: 98.96% / test_loss: 0.9267, test_acc: 97.79%
Epoch 332 - loss: 0.9151, acc: 98.99% / test_loss: 0.9219, test_acc: 98.29%
Epoch 333 - loss: 0.9144, acc: 99.06% / test_loss: 0.9222, test_acc: 98.22%
Epoch 334 - loss: 0.9135, acc: 99.14% / test_loss: 0.9215, test_acc: 98.35%
Epoch 335 - loss: 0.9140, acc: 99.09% / test_loss: 0.9228, test_acc: 98.23%
Epoch 336 - loss: 0.9145, acc: 99.05% / test_loss: 0.9223, test_acc: 98.28%
Epoch 337 - loss: 0.9158, acc: 98.93% / test_loss: 0.9222, test_acc: 98.25%
Epoch 338 - loss: 0.9145, acc: 99.03% / test_loss: 0.9272, test_acc: 97.78%
Epoch 339 - loss: 0.9157, acc: 98.91% / test_loss: 0.9228, test_acc: 98.17%
Epoch 340 - loss: 0.9142, acc: 99.09% / test_loss: 0.9219, test_acc: 98.31%
Epoch 341 - loss: 0.9141, acc: 99.08% / test_loss: 0.9218, test_acc: 98.30%
Epoch 342 - loss: 0.9141, acc: 99.06% / test_loss: 0.9225, test_acc: 98.25%
Epoch 343 - loss: 0.9143, acc: 99.06% / test_loss: 0.9218, test_acc: 98.30%
Epoch 344 - loss: 0.9145, acc: 99.05% / test_loss: 0.9299, test_acc: 97.46%
Epoch 345 - loss: 0.9151, acc: 98.97% / test_loss: 0.9228, test_acc: 98.23%
Epoch 346 - loss: 0.9146, acc: 99.03% / test_loss: 0.9250, test_acc: 98.01%
Epoch 347 - loss: 0.9142, acc: 99.09% / test_loss: 0.9232, test_acc: 98.20%
Epoch 348 - loss: 0.9144, acc: 99.05% / test_loss: 0.9225, test_acc: 98.23%
Epoch 349 - loss: 0.9169, acc: 98.79% / test_loss: 0.9226, test_acc: 98.20%
Epoch 350 - loss: 0.9149, acc: 98.99% / test_loss: 0.9230, test_acc: 98.17%
Epoch 351 - loss: 0.9151, acc: 98.98% / test_loss: 0.9244, test_acc: 98.03%
Epoch 352 - loss: 0.9141, acc: 99.09% / test_loss: 0.9222, test_acc: 98.26%
Epoch 353 - loss: 0.9148, acc: 99.00% / test_loss: 0.9240, test_acc: 98.09%
Epoch 354 - loss: 0.9139, acc: 99.09% / test_loss: 0.9221, test_acc: 98.24%
Epoch 355 - loss: 0.9148, acc: 99.02% / test_loss: 0.9229, test_acc: 98.19%
Epoch 356 - loss: 0.9160, acc: 98.87% / test_loss: 0.9265, test_acc: 97.80%
Epoch 357 - loss: 0.9129, acc: 99.20% / test_loss: 0.9228, test_acc: 98.20%
Epoch 358 - loss: 0.9157, acc: 98.91% / test_loss: 0.9230, test_acc: 98.19%
Epoch 359 - loss: 0.9142, acc: 99.05% / test_loss: 0.9223, test_acc: 98.25%
Epoch 360 - loss: 0.9140, acc: 99.10% / test_loss: 0.9224, test_acc: 98.24%
Epoch 361 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.09%
Epoch 362 - loss: 0.9144, acc: 99.06% / test_loss: 0.9228, test_acc: 98.23%
Epoch 363 - loss: 0.9147, acc: 99.01% / test_loss: 0.9236, test_acc: 98.11%
Epoch 364 - loss: 0.9143, acc: 99.06% / test_loss: 0.9232, test_acc: 98.15%
Epoch 365 - loss: 0.9146, acc: 99.03% / test_loss: 0.9220, test_acc: 98.29%
Epoch 366 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 367 - loss: 0.9135, acc: 99.14% / test_loss: 0.9224, test_acc: 98.23%
Epoch 368 - loss: 0.9147, acc: 99.02% / test_loss: 0.9227, test_acc: 98.20%
Epoch 369 - loss: 0.9153, acc: 98.94% / test_loss: 0.9227, test_acc: 98.20%
Epoch 370 - loss: 0.9153, acc: 98.96% / test_loss: 0.9225, test_acc: 98.23%
Epoch 371 - loss: 0.9142, acc: 99.06% / test_loss: 0.9228, test_acc: 98.17%
Epoch 372 - loss: 0.9133, acc: 99.15% / test_loss: 0.9224, test_acc: 98.24%
Epoch 373 - loss: 0.9131, acc: 99.17% / test_loss: 0.9224, test_acc: 98.26%
Epoch 374 - loss: 0.9130, acc: 99.19% / test_loss: 0.9212, test_acc: 98.35%
Epoch 375 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 376 - loss: 0.9154, acc: 98.94% / test_loss: 0.9251, test_acc: 97.98%
Epoch 377 - loss: 0.9143, acc: 99.06% / test_loss: 0.9240, test_acc: 98.07%
Epoch 378 - loss: 0.9143, acc: 99.05% / test_loss: 0.9245, test_acc: 98.01%
Epoch 379 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.31%
Epoch 380 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.26%
Epoch 381 - loss: 0.9134, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 382 - loss: 0.9146, acc: 99.05% / test_loss: 0.9247, test_acc: 98.03%
Epoch 383 - loss: 0.9145, acc: 99.03% / test_loss: 0.9227, test_acc: 98.20%
Epoch 384 - loss: 0.9147, acc: 99.01% / test_loss: 0.9257, test_acc: 97.89%
Epoch 385 - loss: 0.9143, acc: 99.06% / test_loss: 0.9227, test_acc: 98.21%
Epoch 386 - loss: 0.9130, acc: 99.18% / test_loss: 0.9210, test_acc: 98.37%
Epoch 387 - loss: 0.9129, acc: 99.19% / test_loss: 0.9211, test_acc: 98.36%
Epoch 388 - loss: 0.9146, acc: 99.01% / test_loss: 0.9240, test_acc: 98.07%
Epoch 389 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.26%
Epoch 390 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.33%
Epoch 391 - loss: 0.9133, acc: 99.16% / test_loss: 0.9212, test_acc: 98.35%
Epoch 392 - loss: 0.9136, acc: 99.12% / test_loss: 0.9214, test_acc: 98.35%
Epoch 393 - loss: 0.9136, acc: 99.12% / test_loss: 0.9255, test_acc: 97.92%
Epoch 394 - loss: 0.9164, acc: 98.84% / test_loss: 0.9231, test_acc: 98.17%
Epoch 395 - loss: 0.9142, acc: 99.06% / test_loss: 0.9230, test_acc: 98.15%
Epoch 396 - loss: 0.9149, acc: 99.00% / test_loss: 0.9223, test_acc: 98.25%
Epoch 397 - loss: 0.9139, acc: 99.08% / test_loss: 0.9216, test_acc: 98.32%
Epoch 398 - loss: 0.9142, acc: 99.06% / test_loss: 0.9225, test_acc: 98.24%
Epoch 399 - loss: 0.9135, acc: 99.15% / test_loss: 0.9227, test_acc: 98.20%
Epoch 400 - loss: 0.9143, acc: 99.05% / test_loss: 0.9241, test_acc: 98.07%
Best test accuracy 98.37% in epoch 386.
----------------------------------------------------------------------------------------------------
Run 6
Epoch 1 - loss: 1.3549, acc: 55.08% / test_loss: 1.2234, test_acc: 68.43%
Epoch 2 - loss: 1.1673, acc: 74.80% / test_loss: 1.0819, test_acc: 84.23%
Epoch 3 - loss: 1.0700, acc: 84.26% / test_loss: 1.0454, test_acc: 86.61%
Epoch 4 - loss: 1.0498, acc: 85.96% / test_loss: 1.0345, test_acc: 87.38%
Epoch 5 - loss: 1.0409, acc: 86.70% / test_loss: 1.0311, test_acc: 87.45%
Epoch 6 - loss: 1.0371, acc: 86.92% / test_loss: 1.0354, test_acc: 87.22%
Epoch 7 - loss: 1.0322, acc: 87.39% / test_loss: 1.0206, test_acc: 88.57%
Epoch 8 - loss: 1.0302, acc: 87.53% / test_loss: 1.0215, test_acc: 88.35%
Epoch 9 - loss: 1.0267, acc: 87.87% / test_loss: 1.0254, test_acc: 88.21%
Epoch 10 - loss: 1.0262, acc: 87.85% / test_loss: 1.0164, test_acc: 88.86%
Epoch 11 - loss: 1.0233, acc: 88.16% / test_loss: 1.0173, test_acc: 88.79%
Epoch 12 - loss: 1.0201, acc: 88.48% / test_loss: 1.0159, test_acc: 88.83%
Epoch 13 - loss: 1.0203, acc: 88.41% / test_loss: 1.0117, test_acc: 89.29%
Epoch 14 - loss: 1.0198, acc: 88.43% / test_loss: 1.0096, test_acc: 89.38%
Epoch 15 - loss: 1.0170, acc: 88.73% / test_loss: 1.0086, test_acc: 89.52%
Epoch 16 - loss: 1.0174, acc: 88.67% / test_loss: 1.0095, test_acc: 89.43%
Epoch 17 - loss: 1.0148, acc: 88.84% / test_loss: 1.0095, test_acc: 89.41%
Epoch 18 - loss: 1.0141, acc: 88.95% / test_loss: 1.0125, test_acc: 89.14%
Epoch 19 - loss: 1.0132, acc: 89.01% / test_loss: 1.0063, test_acc: 89.79%
Epoch 20 - loss: 1.0112, acc: 89.27% / test_loss: 1.0090, test_acc: 89.59%
Epoch 21 - loss: 1.0106, acc: 89.31% / test_loss: 1.0116, test_acc: 89.27%
Epoch 22 - loss: 1.0122, acc: 89.16% / test_loss: 1.0056, test_acc: 89.74%
Epoch 23 - loss: 1.0090, acc: 89.40% / test_loss: 1.0049, test_acc: 89.85%
Epoch 24 - loss: 1.0083, acc: 89.54% / test_loss: 1.0036, test_acc: 90.01%
Epoch 25 - loss: 1.0077, acc: 89.59% / test_loss: 1.0048, test_acc: 89.78%
Epoch 26 - loss: 1.0067, acc: 89.65% / test_loss: 1.0023, test_acc: 90.02%
Epoch 27 - loss: 1.0075, acc: 89.54% / test_loss: 1.0023, test_acc: 90.09%
Epoch 28 - loss: 1.0073, acc: 89.59% / test_loss: 1.0041, test_acc: 89.92%
Epoch 29 - loss: 1.0066, acc: 89.67% / test_loss: 1.0014, test_acc: 90.15%
Epoch 30 - loss: 1.0049, acc: 89.84% / test_loss: 1.0023, test_acc: 90.02%
Epoch 31 - loss: 1.0062, acc: 89.71% / test_loss: 1.0014, test_acc: 90.15%
Epoch 32 - loss: 1.0053, acc: 89.76% / test_loss: 1.0024, test_acc: 90.07%
Epoch 33 - loss: 1.0067, acc: 89.64% / test_loss: 1.0028, test_acc: 90.10%
Epoch 34 - loss: 1.0048, acc: 89.84% / test_loss: 1.0002, test_acc: 90.24%
Epoch 35 - loss: 1.0060, acc: 89.72% / test_loss: 1.0020, test_acc: 90.16%
Epoch 36 - loss: 1.0044, acc: 89.83% / test_loss: 0.9994, test_acc: 90.36%
Epoch 37 - loss: 1.0047, acc: 89.82% / test_loss: 1.0075, test_acc: 89.60%
Epoch 38 - loss: 1.0054, acc: 89.75% / test_loss: 1.0020, test_acc: 90.12%
Epoch 39 - loss: 1.0033, acc: 89.97% / test_loss: 1.0025, test_acc: 90.04%
Epoch 40 - loss: 1.0026, acc: 90.03% / test_loss: 1.0005, test_acc: 90.20%
Epoch 41 - loss: 1.0044, acc: 89.86% / test_loss: 1.0002, test_acc: 90.22%
Epoch 42 - loss: 1.0029, acc: 89.98% / test_loss: 1.0015, test_acc: 90.12%
Epoch 43 - loss: 1.0016, acc: 90.12% / test_loss: 0.9982, test_acc: 90.40%
Epoch 44 - loss: 1.0023, acc: 90.00% / test_loss: 0.9990, test_acc: 90.36%
Epoch 45 - loss: 1.0041, acc: 89.82% / test_loss: 1.0007, test_acc: 90.32%
Epoch 46 - loss: 1.0022, acc: 90.04% / test_loss: 0.9992, test_acc: 90.37%
Epoch 47 - loss: 1.0022, acc: 90.03% / test_loss: 0.9972, test_acc: 90.48%
Epoch 48 - loss: 1.0004, acc: 90.20% / test_loss: 0.9978, test_acc: 90.43%
Epoch 49 - loss: 1.0015, acc: 90.10% / test_loss: 0.9989, test_acc: 90.30%
Epoch 50 - loss: 1.0018, acc: 90.01% / test_loss: 0.9978, test_acc: 90.47%
Epoch 51 - loss: 0.9997, acc: 90.27% / test_loss: 0.9968, test_acc: 90.52%
Epoch 52 - loss: 1.0003, acc: 90.21% / test_loss: 0.9993, test_acc: 90.28%
Epoch 53 - loss: 1.0017, acc: 90.08% / test_loss: 1.0043, test_acc: 89.88%
Epoch 54 - loss: 1.0014, acc: 90.10% / test_loss: 0.9972, test_acc: 90.49%
Epoch 55 - loss: 0.9996, acc: 90.25% / test_loss: 0.9971, test_acc: 90.52%
Epoch 56 - loss: 1.0001, acc: 90.25% / test_loss: 0.9990, test_acc: 90.37%
Epoch 57 - loss: 1.0000, acc: 90.22% / test_loss: 0.9995, test_acc: 90.34%
Epoch 58 - loss: 0.9995, acc: 90.28% / test_loss: 1.0024, test_acc: 90.06%
Epoch 59 - loss: 0.9992, acc: 90.28% / test_loss: 0.9974, test_acc: 90.52%
Epoch 60 - loss: 0.9979, acc: 90.43% / test_loss: 0.9969, test_acc: 90.57%
Epoch 61 - loss: 0.9978, acc: 90.43% / test_loss: 0.9959, test_acc: 90.62%
Epoch 62 - loss: 0.9972, acc: 90.49% / test_loss: 0.9965, test_acc: 90.65%
Epoch 63 - loss: 0.9980, acc: 90.40% / test_loss: 0.9957, test_acc: 90.67%
Epoch 64 - loss: 0.9986, acc: 90.36% / test_loss: 0.9959, test_acc: 90.62%
Epoch 65 - loss: 0.9973, acc: 90.47% / test_loss: 0.9952, test_acc: 90.63%
Epoch 66 - loss: 0.9964, acc: 90.53% / test_loss: 0.9935, test_acc: 90.86%
Epoch 67 - loss: 0.9968, acc: 90.55% / test_loss: 0.9939, test_acc: 90.80%
Epoch 68 - loss: 0.9960, acc: 90.61% / test_loss: 0.9969, test_acc: 90.51%
Epoch 69 - loss: 0.9956, acc: 90.63% / test_loss: 0.9955, test_acc: 90.65%
Epoch 70 - loss: 0.9957, acc: 90.65% / test_loss: 0.9933, test_acc: 90.86%
Epoch 71 - loss: 0.9950, acc: 90.65% / test_loss: 0.9965, test_acc: 90.66%
Epoch 72 - loss: 0.9970, acc: 90.52% / test_loss: 0.9940, test_acc: 90.79%
Epoch 73 - loss: 0.9947, acc: 90.74% / test_loss: 0.9956, test_acc: 90.69%
Epoch 74 - loss: 0.9958, acc: 90.65% / test_loss: 0.9931, test_acc: 90.89%
Epoch 75 - loss: 0.9931, acc: 90.86% / test_loss: 0.9935, test_acc: 90.81%
Epoch 76 - loss: 0.9948, acc: 90.71% / test_loss: 0.9940, test_acc: 90.83%
Epoch 77 - loss: 0.9948, acc: 90.71% / test_loss: 0.9926, test_acc: 90.88%
Epoch 78 - loss: 0.9921, acc: 90.96% / test_loss: 0.9941, test_acc: 90.78%
Epoch 79 - loss: 0.9912, acc: 91.12% / test_loss: 0.9898, test_acc: 91.21%
Epoch 80 - loss: 0.9878, acc: 91.39% / test_loss: 0.9868, test_acc: 91.51%
Epoch 81 - loss: 0.9863, acc: 91.51% / test_loss: 0.9831, test_acc: 91.94%
Epoch 82 - loss: 0.9831, acc: 91.87% / test_loss: 0.9833, test_acc: 91.85%
Epoch 83 - loss: 0.9831, acc: 91.88% / test_loss: 0.9819, test_acc: 91.98%
Epoch 84 - loss: 0.9806, acc: 92.11% / test_loss: 0.9860, test_acc: 91.57%
Epoch 85 - loss: 0.9801, acc: 92.13% / test_loss: 0.9794, test_acc: 92.20%
Epoch 86 - loss: 0.9779, acc: 92.34% / test_loss: 0.9796, test_acc: 92.19%
Epoch 87 - loss: 0.9780, acc: 92.34% / test_loss: 0.9842, test_acc: 91.79%
Epoch 88 - loss: 0.9771, acc: 92.45% / test_loss: 0.9804, test_acc: 92.10%
Epoch 89 - loss: 0.9775, acc: 92.43% / test_loss: 0.9902, test_acc: 91.19%
Epoch 90 - loss: 0.9765, acc: 92.53% / test_loss: 0.9805, test_acc: 92.07%
Epoch 91 - loss: 0.9749, acc: 92.68% / test_loss: 0.9790, test_acc: 92.19%
Epoch 92 - loss: 0.9749, acc: 92.66% / test_loss: 0.9776, test_acc: 92.39%
Epoch 93 - loss: 0.9748, acc: 92.71% / test_loss: 0.9825, test_acc: 91.91%
Epoch 94 - loss: 0.9751, acc: 92.65% / test_loss: 0.9777, test_acc: 92.44%
Epoch 95 - loss: 0.9729, acc: 92.86% / test_loss: 0.9769, test_acc: 92.43%
Epoch 96 - loss: 0.9735, acc: 92.78% / test_loss: 0.9785, test_acc: 92.29%
Epoch 97 - loss: 0.9760, acc: 92.57% / test_loss: 0.9776, test_acc: 92.42%
Epoch 98 - loss: 0.9734, acc: 92.82% / test_loss: 0.9776, test_acc: 92.40%
Epoch 99 - loss: 0.9745, acc: 92.73% / test_loss: 0.9778, test_acc: 92.33%
Epoch 100 - loss: 0.9735, acc: 92.83% / test_loss: 0.9761, test_acc: 92.55%
Epoch 101 - loss: 0.9748, acc: 92.68% / test_loss: 0.9805, test_acc: 92.13%
Epoch 102 - loss: 0.9740, acc: 92.74% / test_loss: 0.9773, test_acc: 92.42%
Epoch 103 - loss: 0.9739, acc: 92.78% / test_loss: 0.9780, test_acc: 92.33%
Epoch 104 - loss: 0.9732, acc: 92.87% / test_loss: 0.9770, test_acc: 92.42%
Epoch 105 - loss: 0.9722, acc: 92.91% / test_loss: 0.9752, test_acc: 92.62%
Epoch 106 - loss: 0.9718, acc: 92.93% / test_loss: 0.9754, test_acc: 92.59%
Epoch 107 - loss: 0.9732, acc: 92.89% / test_loss: 0.9778, test_acc: 92.46%
Epoch 108 - loss: 0.9721, acc: 92.91% / test_loss: 0.9759, test_acc: 92.49%
Epoch 109 - loss: 0.9717, acc: 92.99% / test_loss: 0.9770, test_acc: 92.44%
Epoch 110 - loss: 0.9735, acc: 92.82% / test_loss: 0.9766, test_acc: 92.46%
Epoch 111 - loss: 0.9715, acc: 92.99% / test_loss: 0.9750, test_acc: 92.67%
Epoch 112 - loss: 0.9732, acc: 92.86% / test_loss: 0.9755, test_acc: 92.64%
Epoch 113 - loss: 0.9725, acc: 92.89% / test_loss: 0.9744, test_acc: 92.72%
Epoch 114 - loss: 0.9704, acc: 93.10% / test_loss: 0.9733, test_acc: 92.81%
Epoch 115 - loss: 0.9708, acc: 93.08% / test_loss: 0.9748, test_acc: 92.65%
Epoch 116 - loss: 0.9690, acc: 93.25% / test_loss: 0.9734, test_acc: 92.72%
Epoch 117 - loss: 0.9695, acc: 93.19% / test_loss: 0.9730, test_acc: 92.79%
Epoch 118 - loss: 0.9708, acc: 93.06% / test_loss: 0.9722, test_acc: 92.93%
Epoch 119 - loss: 0.9711, acc: 93.02% / test_loss: 0.9815, test_acc: 92.07%
Epoch 120 - loss: 0.9706, acc: 93.08% / test_loss: 0.9738, test_acc: 92.77%
Epoch 121 - loss: 0.9690, acc: 93.27% / test_loss: 0.9726, test_acc: 92.85%
Epoch 122 - loss: 0.9688, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 123 - loss: 0.9685, acc: 93.29% / test_loss: 0.9726, test_acc: 92.80%
Epoch 124 - loss: 0.9703, acc: 93.11% / test_loss: 0.9731, test_acc: 92.80%
Epoch 125 - loss: 0.9686, acc: 93.30% / test_loss: 0.9727, test_acc: 92.88%
Epoch 126 - loss: 0.9692, acc: 93.21% / test_loss: 0.9718, test_acc: 92.95%
Epoch 127 - loss: 0.9677, acc: 93.34% / test_loss: 0.9776, test_acc: 92.32%
Epoch 128 - loss: 0.9682, acc: 93.31% / test_loss: 0.9723, test_acc: 92.88%
Epoch 129 - loss: 0.9697, acc: 93.19% / test_loss: 0.9743, test_acc: 92.69%
Epoch 130 - loss: 0.9687, acc: 93.27% / test_loss: 0.9714, test_acc: 92.97%
Epoch 131 - loss: 0.9677, acc: 93.34% / test_loss: 0.9717, test_acc: 92.97%
Epoch 132 - loss: 0.9680, acc: 93.36% / test_loss: 0.9728, test_acc: 92.87%
Epoch 133 - loss: 0.9679, acc: 93.34% / test_loss: 0.9725, test_acc: 92.84%
Epoch 134 - loss: 0.9685, acc: 93.32% / test_loss: 0.9742, test_acc: 92.70%
Epoch 135 - loss: 0.9672, acc: 93.44% / test_loss: 0.9727, test_acc: 92.87%
Epoch 136 - loss: 0.9677, acc: 93.39% / test_loss: 0.9726, test_acc: 92.91%
Epoch 137 - loss: 0.9672, acc: 93.42% / test_loss: 0.9720, test_acc: 92.90%
Epoch 138 - loss: 0.9667, acc: 93.45% / test_loss: 0.9746, test_acc: 92.68%
Epoch 139 - loss: 0.9670, acc: 93.41% / test_loss: 0.9720, test_acc: 92.92%
Epoch 140 - loss: 0.9680, acc: 93.32% / test_loss: 0.9723, test_acc: 92.92%
Epoch 141 - loss: 0.9669, acc: 93.42% / test_loss: 0.9723, test_acc: 92.87%
Epoch 142 - loss: 0.9671, acc: 93.40% / test_loss: 0.9702, test_acc: 93.09%
Epoch 143 - loss: 0.9674, acc: 93.39% / test_loss: 0.9717, test_acc: 92.98%
Epoch 144 - loss: 0.9666, acc: 93.48% / test_loss: 0.9710, test_acc: 93.01%
Epoch 145 - loss: 0.9657, acc: 93.55% / test_loss: 0.9711, test_acc: 93.01%
Epoch 146 - loss: 0.9658, acc: 93.54% / test_loss: 0.9729, test_acc: 92.80%
Epoch 147 - loss: 0.9662, acc: 93.53% / test_loss: 0.9720, test_acc: 92.91%
Epoch 148 - loss: 0.9672, acc: 93.40% / test_loss: 0.9710, test_acc: 93.02%
Epoch 149 - loss: 0.9672, acc: 93.43% / test_loss: 0.9732, test_acc: 92.79%
Epoch 150 - loss: 0.9661, acc: 93.51% / test_loss: 0.9702, test_acc: 93.08%
Epoch 151 - loss: 0.9663, acc: 93.49% / test_loss: 0.9708, test_acc: 93.05%
Epoch 152 - loss: 0.9662, acc: 93.49% / test_loss: 0.9703, test_acc: 93.06%
Epoch 153 - loss: 0.9665, acc: 93.48% / test_loss: 0.9705, test_acc: 93.07%
Epoch 154 - loss: 0.9657, acc: 93.54% / test_loss: 0.9710, test_acc: 92.99%
Epoch 155 - loss: 0.9657, acc: 93.56% / test_loss: 0.9734, test_acc: 92.74%
Epoch 156 - loss: 0.9669, acc: 93.46% / test_loss: 0.9702, test_acc: 93.11%
Epoch 157 - loss: 0.9661, acc: 93.51% / test_loss: 0.9709, test_acc: 93.01%
Epoch 158 - loss: 0.9652, acc: 93.59% / test_loss: 0.9699, test_acc: 93.10%
Epoch 159 - loss: 0.9663, acc: 93.50% / test_loss: 0.9702, test_acc: 93.11%
Epoch 160 - loss: 0.9659, acc: 93.52% / test_loss: 0.9706, test_acc: 93.05%
Epoch 161 - loss: 0.9658, acc: 93.51% / test_loss: 0.9698, test_acc: 93.14%
Epoch 162 - loss: 0.9649, acc: 93.61% / test_loss: 0.9761, test_acc: 92.59%
Epoch 163 - loss: 0.9673, acc: 93.42% / test_loss: 0.9702, test_acc: 93.08%
Epoch 164 - loss: 0.9660, acc: 93.50% / test_loss: 0.9701, test_acc: 93.07%
Epoch 165 - loss: 0.9653, acc: 93.56% / test_loss: 0.9707, test_acc: 93.05%
Epoch 166 - loss: 0.9662, acc: 93.48% / test_loss: 0.9705, test_acc: 93.08%
Epoch 167 - loss: 0.9659, acc: 93.52% / test_loss: 0.9698, test_acc: 93.15%
Epoch 168 - loss: 0.9645, acc: 93.66% / test_loss: 0.9701, test_acc: 93.08%
Epoch 169 - loss: 0.9647, acc: 93.64% / test_loss: 0.9709, test_acc: 93.01%
Epoch 170 - loss: 0.9644, acc: 93.67% / test_loss: 0.9702, test_acc: 93.05%
Epoch 171 - loss: 0.9651, acc: 93.58% / test_loss: 0.9697, test_acc: 93.14%
Epoch 172 - loss: 0.9657, acc: 93.53% / test_loss: 0.9696, test_acc: 93.12%
Epoch 173 - loss: 0.9655, acc: 93.56% / test_loss: 0.9731, test_acc: 92.81%
Epoch 174 - loss: 0.9666, acc: 93.47% / test_loss: 0.9704, test_acc: 93.03%
Epoch 175 - loss: 0.9653, acc: 93.60% / test_loss: 0.9715, test_acc: 92.98%
Epoch 176 - loss: 0.9658, acc: 93.51% / test_loss: 0.9693, test_acc: 93.16%
Epoch 177 - loss: 0.9650, acc: 93.61% / test_loss: 0.9721, test_acc: 92.90%
Epoch 178 - loss: 0.9645, acc: 93.67% / test_loss: 0.9704, test_acc: 93.08%
Epoch 179 - loss: 0.9645, acc: 93.67% / test_loss: 0.9698, test_acc: 93.14%
Epoch 180 - loss: 0.9652, acc: 93.59% / test_loss: 0.9712, test_acc: 93.02%
Epoch 181 - loss: 0.9659, acc: 93.55% / test_loss: 0.9692, test_acc: 93.20%
Epoch 182 - loss: 0.9650, acc: 93.60% / test_loss: 0.9684, test_acc: 93.26%
Epoch 183 - loss: 0.9650, acc: 93.61% / test_loss: 0.9715, test_acc: 92.97%
Epoch 184 - loss: 0.9642, acc: 93.65% / test_loss: 0.9690, test_acc: 93.21%
Epoch 185 - loss: 0.9639, acc: 93.70% / test_loss: 0.9688, test_acc: 93.24%
Epoch 186 - loss: 0.9643, acc: 93.68% / test_loss: 0.9699, test_acc: 93.14%
Epoch 187 - loss: 0.9663, acc: 93.46% / test_loss: 0.9697, test_acc: 93.13%
Epoch 188 - loss: 0.9649, acc: 93.64% / test_loss: 0.9698, test_acc: 93.11%
Epoch 189 - loss: 0.9647, acc: 93.63% / test_loss: 0.9691, test_acc: 93.21%
Epoch 190 - loss: 0.9640, acc: 93.73% / test_loss: 0.9681, test_acc: 93.31%
Epoch 191 - loss: 0.9643, acc: 93.65% / test_loss: 0.9686, test_acc: 93.24%
Epoch 192 - loss: 0.9651, acc: 93.57% / test_loss: 0.9706, test_acc: 93.06%
Epoch 193 - loss: 0.9660, acc: 93.50% / test_loss: 0.9756, test_acc: 92.53%
Epoch 194 - loss: 0.9646, acc: 93.66% / test_loss: 0.9692, test_acc: 93.19%
Epoch 195 - loss: 0.9650, acc: 93.61% / test_loss: 0.9687, test_acc: 93.23%
Epoch 196 - loss: 0.9639, acc: 93.73% / test_loss: 0.9727, test_acc: 92.89%
Epoch 197 - loss: 0.9626, acc: 93.83% / test_loss: 0.9679, test_acc: 93.33%
Epoch 198 - loss: 0.9632, acc: 93.76% / test_loss: 0.9684, test_acc: 93.27%
Epoch 199 - loss: 0.9637, acc: 93.73% / test_loss: 0.9702, test_acc: 93.10%
Epoch 200 - loss: 0.9642, acc: 93.67% / test_loss: 0.9695, test_acc: 93.14%
Epoch 201 - loss: 0.9641, acc: 93.70% / test_loss: 0.9691, test_acc: 93.17%
Epoch 202 - loss: 0.9645, acc: 93.67% / test_loss: 0.9756, test_acc: 92.50%
Epoch 203 - loss: 0.9642, acc: 93.67% / test_loss: 0.9721, test_acc: 92.87%
Epoch 204 - loss: 0.9661, acc: 93.46% / test_loss: 0.9682, test_acc: 93.27%
Epoch 205 - loss: 0.9651, acc: 93.60% / test_loss: 0.9695, test_acc: 93.17%
Epoch 206 - loss: 0.9639, acc: 93.69% / test_loss: 0.9708, test_acc: 92.99%
Epoch 207 - loss: 0.9640, acc: 93.67% / test_loss: 0.9682, test_acc: 93.29%
Epoch 208 - loss: 0.9635, acc: 93.75% / test_loss: 0.9674, test_acc: 93.36%
Epoch 209 - loss: 0.9632, acc: 93.76% / test_loss: 0.9678, test_acc: 93.35%
Epoch 210 - loss: 0.9637, acc: 93.74% / test_loss: 0.9718, test_acc: 92.90%
Epoch 211 - loss: 0.9643, acc: 93.67% / test_loss: 0.9691, test_acc: 93.19%
Epoch 212 - loss: 0.9626, acc: 93.82% / test_loss: 0.9735, test_acc: 92.80%
Epoch 213 - loss: 0.9638, acc: 93.73% / test_loss: 0.9679, test_acc: 93.32%
Epoch 214 - loss: 0.9630, acc: 93.80% / test_loss: 0.9712, test_acc: 92.96%
Epoch 215 - loss: 0.9629, acc: 93.83% / test_loss: 0.9692, test_acc: 93.17%
Epoch 216 - loss: 0.9627, acc: 93.82% / test_loss: 0.9681, test_acc: 93.30%
Epoch 217 - loss: 0.9636, acc: 93.76% / test_loss: 0.9687, test_acc: 93.23%
Epoch 218 - loss: 0.9639, acc: 93.70% / test_loss: 0.9701, test_acc: 93.07%
Epoch 219 - loss: 0.9633, acc: 93.76% / test_loss: 0.9695, test_acc: 93.14%
Epoch 220 - loss: 0.9626, acc: 93.84% / test_loss: 0.9712, test_acc: 92.97%
Epoch 221 - loss: 0.9622, acc: 93.86% / test_loss: 0.9693, test_acc: 93.15%
Epoch 222 - loss: 0.9633, acc: 93.75% / test_loss: 0.9689, test_acc: 93.21%
Epoch 223 - loss: 0.9640, acc: 93.73% / test_loss: 0.9680, test_acc: 93.31%
Epoch 224 - loss: 0.9624, acc: 93.83% / test_loss: 0.9681, test_acc: 93.27%
Epoch 225 - loss: 0.9631, acc: 93.78% / test_loss: 0.9692, test_acc: 93.24%
Epoch 226 - loss: 0.9642, acc: 93.66% / test_loss: 0.9689, test_acc: 93.20%
Epoch 227 - loss: 0.9645, acc: 93.67% / test_loss: 0.9707, test_acc: 93.02%
Epoch 228 - loss: 0.9638, acc: 93.74% / test_loss: 0.9704, test_acc: 93.05%
Epoch 229 - loss: 0.9634, acc: 93.74% / test_loss: 0.9673, test_acc: 93.39%
Epoch 230 - loss: 0.9634, acc: 93.74% / test_loss: 0.9700, test_acc: 93.08%
Epoch 231 - loss: 0.9637, acc: 93.73% / test_loss: 0.9675, test_acc: 93.35%
Epoch 232 - loss: 0.9629, acc: 93.79% / test_loss: 0.9694, test_acc: 93.16%
Epoch 233 - loss: 0.9631, acc: 93.77% / test_loss: 0.9713, test_acc: 92.93%
Epoch 234 - loss: 0.9625, acc: 93.84% / test_loss: 0.9676, test_acc: 93.32%
Epoch 235 - loss: 0.9618, acc: 93.91% / test_loss: 0.9671, test_acc: 93.38%
Epoch 236 - loss: 0.9621, acc: 93.87% / test_loss: 0.9688, test_acc: 93.20%
Epoch 237 - loss: 0.9635, acc: 93.76% / test_loss: 0.9689, test_acc: 93.20%
Epoch 238 - loss: 0.9623, acc: 93.87% / test_loss: 0.9711, test_acc: 92.97%
Epoch 239 - loss: 0.9654, acc: 93.54% / test_loss: 0.9707, test_acc: 93.03%
Epoch 240 - loss: 0.9649, acc: 93.61% / test_loss: 0.9693, test_acc: 93.18%
Epoch 241 - loss: 0.9649, acc: 93.63% / test_loss: 0.9701, test_acc: 93.11%
Epoch 242 - loss: 0.9624, acc: 93.87% / test_loss: 0.9683, test_acc: 93.26%
Epoch 243 - loss: 0.9623, acc: 93.86% / test_loss: 0.9676, test_acc: 93.33%
Epoch 244 - loss: 0.9621, acc: 93.87% / test_loss: 0.9678, test_acc: 93.36%
Epoch 245 - loss: 0.9648, acc: 93.64% / test_loss: 0.9689, test_acc: 93.24%
Epoch 246 - loss: 0.9623, acc: 93.84% / test_loss: 0.9677, test_acc: 93.34%
Epoch 247 - loss: 0.9620, acc: 93.88% / test_loss: 0.9688, test_acc: 93.24%
Epoch 248 - loss: 0.9624, acc: 93.84% / test_loss: 0.9682, test_acc: 93.25%
Epoch 249 - loss: 0.9621, acc: 93.88% / test_loss: 0.9673, test_acc: 93.36%
Epoch 250 - loss: 0.9626, acc: 93.82% / test_loss: 0.9708, test_acc: 93.08%
Epoch 251 - loss: 0.9644, acc: 93.66% / test_loss: 0.9707, test_acc: 93.01%
Epoch 252 - loss: 0.9631, acc: 93.77% / test_loss: 0.9678, test_acc: 93.33%
Epoch 253 - loss: 0.9619, acc: 93.90% / test_loss: 0.9674, test_acc: 93.33%
Epoch 254 - loss: 0.9631, acc: 93.76% / test_loss: 0.9676, test_acc: 93.32%
Epoch 255 - loss: 0.9634, acc: 93.75% / test_loss: 0.9678, test_acc: 93.30%
Epoch 256 - loss: 0.9645, acc: 93.63% / test_loss: 0.9695, test_acc: 93.16%
Epoch 257 - loss: 0.9631, acc: 93.79% / test_loss: 0.9678, test_acc: 93.30%
Epoch 258 - loss: 0.9619, acc: 93.89% / test_loss: 0.9690, test_acc: 93.20%
Epoch 259 - loss: 0.9623, acc: 93.87% / test_loss: 0.9676, test_acc: 93.31%
Epoch 260 - loss: 0.9622, acc: 93.88% / test_loss: 0.9646, test_acc: 93.66%
Epoch 261 - loss: 0.9597, acc: 94.13% / test_loss: 0.9644, test_acc: 93.70%
Epoch 262 - loss: 0.9610, acc: 94.01% / test_loss: 0.9659, test_acc: 93.51%
Epoch 263 - loss: 0.9582, acc: 94.32% / test_loss: 0.9645, test_acc: 93.65%
Epoch 264 - loss: 0.9272, acc: 97.69% / test_loss: 0.9245, test_acc: 98.03%
Epoch 265 - loss: 0.9168, acc: 98.83% / test_loss: 0.9238, test_acc: 98.12%
Epoch 266 - loss: 0.9145, acc: 99.09% / test_loss: 0.9220, test_acc: 98.27%
Epoch 267 - loss: 0.9144, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 268 - loss: 0.9137, acc: 99.12% / test_loss: 0.9233, test_acc: 98.16%
Epoch 269 - loss: 0.9156, acc: 98.93% / test_loss: 0.9298, test_acc: 97.47%
Epoch 270 - loss: 0.9162, acc: 98.88% / test_loss: 0.9232, test_acc: 98.17%
Epoch 271 - loss: 0.9160, acc: 98.88% / test_loss: 0.9286, test_acc: 97.58%
Epoch 272 - loss: 0.9159, acc: 98.91% / test_loss: 0.9232, test_acc: 98.15%
Epoch 273 - loss: 0.9147, acc: 99.01% / test_loss: 0.9218, test_acc: 98.31%
Epoch 274 - loss: 0.9158, acc: 98.91% / test_loss: 0.9242, test_acc: 98.07%
Epoch 275 - loss: 0.9144, acc: 99.05% / test_loss: 0.9227, test_acc: 98.20%
Epoch 276 - loss: 0.9155, acc: 98.94% / test_loss: 0.9230, test_acc: 98.23%
Epoch 277 - loss: 0.9150, acc: 99.00% / test_loss: 0.9229, test_acc: 98.20%
Epoch 278 - loss: 0.9145, acc: 99.03% / test_loss: 0.9228, test_acc: 98.22%
Epoch 279 - loss: 0.9146, acc: 99.03% / test_loss: 0.9246, test_acc: 98.01%
Epoch 280 - loss: 0.9169, acc: 98.80% / test_loss: 0.9258, test_acc: 97.89%
Epoch 281 - loss: 0.9151, acc: 98.98% / test_loss: 0.9262, test_acc: 97.83%
Epoch 282 - loss: 0.9150, acc: 99.01% / test_loss: 0.9222, test_acc: 98.28%
Epoch 283 - loss: 0.9143, acc: 99.06% / test_loss: 0.9220, test_acc: 98.26%
Epoch 284 - loss: 0.9137, acc: 99.12% / test_loss: 0.9221, test_acc: 98.28%
Epoch 285 - loss: 0.9149, acc: 99.00% / test_loss: 0.9241, test_acc: 98.08%
Epoch 286 - loss: 0.9157, acc: 98.92% / test_loss: 0.9231, test_acc: 98.14%
Epoch 287 - loss: 0.9153, acc: 98.94% / test_loss: 0.9230, test_acc: 98.18%
Epoch 288 - loss: 0.9154, acc: 98.94% / test_loss: 0.9239, test_acc: 98.10%
Epoch 289 - loss: 0.9148, acc: 99.01% / test_loss: 0.9233, test_acc: 98.14%
Epoch 290 - loss: 0.9141, acc: 99.08% / test_loss: 0.9235, test_acc: 98.14%
Epoch 291 - loss: 0.9141, acc: 99.08% / test_loss: 0.9235, test_acc: 98.11%
Epoch 292 - loss: 0.9145, acc: 99.04% / test_loss: 0.9224, test_acc: 98.23%
Epoch 293 - loss: 0.9150, acc: 98.99% / test_loss: 0.9226, test_acc: 98.23%
Epoch 294 - loss: 0.9157, acc: 98.89% / test_loss: 0.9229, test_acc: 98.17%
Epoch 295 - loss: 0.9141, acc: 99.09% / test_loss: 0.9234, test_acc: 98.11%
Epoch 296 - loss: 0.9141, acc: 99.08% / test_loss: 0.9221, test_acc: 98.24%
Epoch 297 - loss: 0.9144, acc: 99.06% / test_loss: 0.9237, test_acc: 98.10%
Epoch 298 - loss: 0.9148, acc: 99.01% / test_loss: 0.9224, test_acc: 98.24%
Epoch 299 - loss: 0.9158, acc: 98.91% / test_loss: 0.9221, test_acc: 98.27%
Epoch 300 - loss: 0.9147, acc: 99.02% / test_loss: 0.9232, test_acc: 98.18%
Epoch 301 - loss: 0.9139, acc: 99.09% / test_loss: 0.9219, test_acc: 98.29%
Epoch 302 - loss: 0.9134, acc: 99.15% / test_loss: 0.9222, test_acc: 98.27%
Epoch 303 - loss: 0.9141, acc: 99.08% / test_loss: 0.9222, test_acc: 98.27%
Epoch 304 - loss: 0.9155, acc: 98.91% / test_loss: 0.9248, test_acc: 98.00%
Epoch 305 - loss: 0.9156, acc: 98.93% / test_loss: 0.9247, test_acc: 97.99%
Epoch 306 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.11%
Epoch 307 - loss: 0.9151, acc: 98.96% / test_loss: 0.9234, test_acc: 98.15%
Epoch 308 - loss: 0.9145, acc: 99.05% / test_loss: 0.9243, test_acc: 98.06%
Epoch 309 - loss: 0.9150, acc: 99.00% / test_loss: 0.9234, test_acc: 98.11%
Epoch 310 - loss: 0.9134, acc: 99.14% / test_loss: 0.9232, test_acc: 98.17%
Epoch 311 - loss: 0.9140, acc: 99.08% / test_loss: 0.9251, test_acc: 97.94%
Epoch 312 - loss: 0.9150, acc: 99.00% / test_loss: 0.9252, test_acc: 97.95%
Epoch 313 - loss: 0.9141, acc: 99.08% / test_loss: 0.9246, test_acc: 98.03%
Epoch 314 - loss: 0.9140, acc: 99.09% / test_loss: 0.9241, test_acc: 98.05%
Epoch 315 - loss: 0.9134, acc: 99.14% / test_loss: 0.9230, test_acc: 98.18%
Epoch 316 - loss: 0.9141, acc: 99.09% / test_loss: 0.9225, test_acc: 98.21%
Epoch 317 - loss: 0.9140, acc: 99.08% / test_loss: 0.9254, test_acc: 97.95%
Epoch 318 - loss: 0.9152, acc: 98.96% / test_loss: 0.9233, test_acc: 98.14%
Epoch 319 - loss: 0.9146, acc: 99.02% / test_loss: 0.9239, test_acc: 98.10%
Epoch 320 - loss: 0.9149, acc: 99.00% / test_loss: 0.9235, test_acc: 98.18%
Epoch 321 - loss: 0.9149, acc: 99.02% / test_loss: 0.9233, test_acc: 98.16%
Epoch 322 - loss: 0.9137, acc: 99.10% / test_loss: 0.9219, test_acc: 98.29%
Epoch 323 - loss: 0.9140, acc: 99.07% / test_loss: 0.9228, test_acc: 98.20%
Epoch 324 - loss: 0.9142, acc: 99.07% / test_loss: 0.9236, test_acc: 98.14%
Epoch 325 - loss: 0.9140, acc: 99.09% / test_loss: 0.9221, test_acc: 98.29%
Epoch 326 - loss: 0.9150, acc: 98.97% / test_loss: 0.9250, test_acc: 98.00%
Epoch 327 - loss: 0.9139, acc: 99.11% / test_loss: 0.9223, test_acc: 98.25%
Epoch 328 - loss: 0.9134, acc: 99.14% / test_loss: 0.9220, test_acc: 98.27%
Epoch 329 - loss: 0.9136, acc: 99.14% / test_loss: 0.9234, test_acc: 98.16%
Epoch 330 - loss: 0.9145, acc: 99.05% / test_loss: 0.9228, test_acc: 98.19%
Epoch 331 - loss: 0.9136, acc: 99.14% / test_loss: 0.9223, test_acc: 98.26%
Epoch 332 - loss: 0.9140, acc: 99.09% / test_loss: 0.9243, test_acc: 98.03%
Epoch 333 - loss: 0.9148, acc: 99.03% / test_loss: 0.9235, test_acc: 98.10%
Epoch 334 - loss: 0.9138, acc: 99.09% / test_loss: 0.9235, test_acc: 98.14%
Epoch 335 - loss: 0.9144, acc: 99.03% / test_loss: 0.9231, test_acc: 98.17%
Epoch 336 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.23%
Epoch 337 - loss: 0.9142, acc: 99.07% / test_loss: 0.9235, test_acc: 98.14%
Epoch 338 - loss: 0.9146, acc: 99.03% / test_loss: 0.9233, test_acc: 98.16%
Epoch 339 - loss: 0.9143, acc: 99.06% / test_loss: 0.9283, test_acc: 97.65%
Epoch 340 - loss: 0.9159, acc: 98.88% / test_loss: 0.9232, test_acc: 98.14%
Epoch 341 - loss: 0.9145, acc: 99.03% / test_loss: 0.9238, test_acc: 98.09%
Epoch 342 - loss: 0.9148, acc: 99.00% / test_loss: 0.9240, test_acc: 98.04%
Epoch 343 - loss: 0.9140, acc: 99.08% / test_loss: 0.9221, test_acc: 98.25%
Epoch 344 - loss: 0.9132, acc: 99.16% / test_loss: 0.9223, test_acc: 98.26%
Epoch 345 - loss: 0.9171, acc: 98.75% / test_loss: 0.9236, test_acc: 98.12%
Epoch 346 - loss: 0.9143, acc: 99.05% / test_loss: 0.9251, test_acc: 97.98%
Epoch 347 - loss: 0.9154, acc: 98.96% / test_loss: 0.9243, test_acc: 98.01%
Epoch 348 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.20%
Epoch 349 - loss: 0.9146, acc: 99.02% / test_loss: 0.9225, test_acc: 98.22%
Epoch 350 - loss: 0.9149, acc: 98.99% / test_loss: 0.9223, test_acc: 98.29%
Epoch 351 - loss: 0.9137, acc: 99.10% / test_loss: 0.9232, test_acc: 98.14%
Epoch 352 - loss: 0.9140, acc: 99.09% / test_loss: 0.9233, test_acc: 98.14%
Epoch 353 - loss: 0.9131, acc: 99.18% / test_loss: 0.9230, test_acc: 98.15%
Epoch 354 - loss: 0.9132, acc: 99.16% / test_loss: 0.9224, test_acc: 98.26%
Epoch 355 - loss: 0.9147, acc: 99.03% / test_loss: 0.9245, test_acc: 97.99%
Epoch 356 - loss: 0.9157, acc: 98.91% / test_loss: 0.9228, test_acc: 98.20%
Epoch 357 - loss: 0.9138, acc: 99.10% / test_loss: 0.9239, test_acc: 98.07%
Epoch 358 - loss: 0.9148, acc: 99.00% / test_loss: 0.9262, test_acc: 97.83%
Epoch 359 - loss: 0.9143, acc: 99.06% / test_loss: 0.9229, test_acc: 98.20%
Epoch 360 - loss: 0.9143, acc: 99.06% / test_loss: 0.9233, test_acc: 98.14%
Epoch 361 - loss: 0.9136, acc: 99.12% / test_loss: 0.9222, test_acc: 98.27%
Epoch 362 - loss: 0.9132, acc: 99.16% / test_loss: 0.9222, test_acc: 98.24%
Epoch 363 - loss: 0.9131, acc: 99.18% / test_loss: 0.9221, test_acc: 98.27%
Epoch 364 - loss: 0.9131, acc: 99.17% / test_loss: 0.9226, test_acc: 98.22%
Epoch 365 - loss: 0.9143, acc: 99.06% / test_loss: 0.9239, test_acc: 98.10%
Epoch 366 - loss: 0.9158, acc: 98.91% / test_loss: 0.9248, test_acc: 98.03%
Epoch 367 - loss: 0.9141, acc: 99.07% / test_loss: 0.9238, test_acc: 98.10%
Epoch 368 - loss: 0.9142, acc: 99.08% / test_loss: 0.9232, test_acc: 98.17%
Epoch 369 - loss: 0.9137, acc: 99.11% / test_loss: 0.9233, test_acc: 98.15%
Epoch 370 - loss: 0.9135, acc: 99.13% / test_loss: 0.9246, test_acc: 97.98%
Epoch 371 - loss: 0.9145, acc: 99.02% / test_loss: 0.9264, test_acc: 97.80%
Epoch 372 - loss: 0.9142, acc: 99.09% / test_loss: 0.9231, test_acc: 98.15%
Epoch 373 - loss: 0.9133, acc: 99.15% / test_loss: 0.9227, test_acc: 98.21%
Epoch 374 - loss: 0.9140, acc: 99.10% / test_loss: 0.9242, test_acc: 98.09%
Epoch 375 - loss: 0.9139, acc: 99.12% / test_loss: 0.9236, test_acc: 98.10%
Epoch 376 - loss: 0.9133, acc: 99.15% / test_loss: 0.9236, test_acc: 98.12%
Epoch 377 - loss: 0.9139, acc: 99.09% / test_loss: 0.9223, test_acc: 98.24%
Epoch 378 - loss: 0.9130, acc: 99.18% / test_loss: 0.9219, test_acc: 98.29%
Epoch 379 - loss: 0.9130, acc: 99.19% / test_loss: 0.9240, test_acc: 98.07%
Epoch 380 - loss: 0.9153, acc: 98.94% / test_loss: 0.9270, test_acc: 97.76%
Epoch 381 - loss: 0.9145, acc: 99.03% / test_loss: 0.9230, test_acc: 98.17%
Epoch 382 - loss: 0.9140, acc: 99.08% / test_loss: 0.9229, test_acc: 98.18%
Epoch 383 - loss: 0.9138, acc: 99.10% / test_loss: 0.9235, test_acc: 98.12%
Epoch 384 - loss: 0.9146, acc: 99.00% / test_loss: 0.9251, test_acc: 97.98%
Epoch 385 - loss: 0.9149, acc: 99.00% / test_loss: 0.9232, test_acc: 98.19%
Epoch 386 - loss: 0.9145, acc: 99.05% / test_loss: 0.9239, test_acc: 98.05%
Epoch 387 - loss: 0.9145, acc: 99.03% / test_loss: 0.9220, test_acc: 98.29%
Epoch 388 - loss: 0.9140, acc: 99.09% / test_loss: 0.9223, test_acc: 98.24%
Epoch 389 - loss: 0.9150, acc: 98.98% / test_loss: 0.9235, test_acc: 98.13%
Epoch 390 - loss: 0.9133, acc: 99.16% / test_loss: 0.9219, test_acc: 98.26%
Epoch 391 - loss: 0.9169, acc: 98.80% / test_loss: 0.9233, test_acc: 98.12%
Epoch 392 - loss: 0.9142, acc: 99.06% / test_loss: 0.9229, test_acc: 98.17%
Epoch 393 - loss: 0.9140, acc: 99.07% / test_loss: 0.9230, test_acc: 98.17%
Epoch 394 - loss: 0.9150, acc: 98.97% / test_loss: 0.9277, test_acc: 97.70%
Epoch 395 - loss: 0.9139, acc: 99.09% / test_loss: 0.9228, test_acc: 98.20%
Epoch 396 - loss: 0.9136, acc: 99.12% / test_loss: 0.9237, test_acc: 98.12%
Epoch 397 - loss: 0.9147, acc: 99.00% / test_loss: 0.9221, test_acc: 98.29%
Epoch 398 - loss: 0.9144, acc: 99.04% / test_loss: 0.9218, test_acc: 98.30%
Epoch 399 - loss: 0.9131, acc: 99.16% / test_loss: 0.9213, test_acc: 98.34%
Epoch 400 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.37%
Best test accuracy 98.37% in epoch 400.
----------------------------------------------------------------------------------------------------
Run 7
Epoch 1 - loss: 1.3347, acc: 57.78% / test_loss: 1.2064, test_acc: 71.40%
Epoch 2 - loss: 1.1408, acc: 77.58% / test_loss: 1.0692, test_acc: 84.88%
Epoch 3 - loss: 1.0620, acc: 85.03% / test_loss: 1.0402, test_acc: 87.04%
Epoch 4 - loss: 1.0488, acc: 86.02% / test_loss: 1.0357, test_acc: 87.26%
Epoch 5 - loss: 1.0401, acc: 86.75% / test_loss: 1.0257, test_acc: 88.06%
Epoch 6 - loss: 1.0343, acc: 87.23% / test_loss: 1.0248, test_acc: 88.04%
Epoch 7 - loss: 1.0306, acc: 87.49% / test_loss: 1.0203, test_acc: 88.47%
Epoch 8 - loss: 1.0274, acc: 87.78% / test_loss: 1.0186, test_acc: 88.73%
Epoch 9 - loss: 1.0257, acc: 87.98% / test_loss: 1.0171, test_acc: 88.74%
Epoch 10 - loss: 1.0249, acc: 88.00% / test_loss: 1.0170, test_acc: 88.76%
Epoch 11 - loss: 1.0213, acc: 88.37% / test_loss: 1.0182, test_acc: 88.69%
Epoch 12 - loss: 1.0216, acc: 88.24% / test_loss: 1.0140, test_acc: 89.17%
Epoch 13 - loss: 1.0200, acc: 88.44% / test_loss: 1.0121, test_acc: 89.24%
Epoch 14 - loss: 1.0178, acc: 88.60% / test_loss: 1.0122, test_acc: 89.23%
Epoch 15 - loss: 1.0171, acc: 88.67% / test_loss: 1.0099, test_acc: 89.46%
Epoch 16 - loss: 1.0143, acc: 88.95% / test_loss: 1.0071, test_acc: 89.68%
Epoch 17 - loss: 1.0151, acc: 88.94% / test_loss: 1.0116, test_acc: 89.41%
Epoch 18 - loss: 1.0155, acc: 88.82% / test_loss: 1.0091, test_acc: 89.61%
Epoch 19 - loss: 1.0134, acc: 88.98% / test_loss: 1.0105, test_acc: 89.35%
Epoch 20 - loss: 1.0121, acc: 89.14% / test_loss: 1.0065, test_acc: 89.63%
Epoch 21 - loss: 1.0127, acc: 89.07% / test_loss: 1.0065, test_acc: 89.69%
Epoch 22 - loss: 1.0112, acc: 89.24% / test_loss: 1.0042, test_acc: 89.85%
Epoch 23 - loss: 1.0110, acc: 89.29% / test_loss: 1.0070, test_acc: 89.77%
Epoch 24 - loss: 1.0096, acc: 89.38% / test_loss: 1.0039, test_acc: 89.94%
Epoch 25 - loss: 1.0089, acc: 89.45% / test_loss: 1.0041, test_acc: 89.94%
Epoch 26 - loss: 1.0077, acc: 89.57% / test_loss: 1.0021, test_acc: 90.15%
Epoch 27 - loss: 1.0083, acc: 89.50% / test_loss: 1.0043, test_acc: 89.88%
Epoch 28 - loss: 1.0077, acc: 89.52% / test_loss: 1.0083, test_acc: 89.42%
Epoch 29 - loss: 1.0080, acc: 89.52% / test_loss: 1.0009, test_acc: 90.15%
Epoch 30 - loss: 1.0054, acc: 89.75% / test_loss: 1.0040, test_acc: 89.89%
Epoch 31 - loss: 1.0068, acc: 89.66% / test_loss: 1.0037, test_acc: 90.07%
Epoch 32 - loss: 1.0070, acc: 89.62% / test_loss: 1.0047, test_acc: 89.98%
Epoch 33 - loss: 1.0057, acc: 89.71% / test_loss: 1.0039, test_acc: 89.95%
Epoch 34 - loss: 1.0062, acc: 89.68% / test_loss: 1.0030, test_acc: 90.03%
Epoch 35 - loss: 1.0055, acc: 89.76% / test_loss: 1.0009, test_acc: 90.19%
Epoch 36 - loss: 1.0054, acc: 89.76% / test_loss: 1.0015, test_acc: 90.18%
Epoch 37 - loss: 1.0038, acc: 89.88% / test_loss: 1.0035, test_acc: 89.88%
Epoch 38 - loss: 1.0025, acc: 90.01% / test_loss: 1.0001, test_acc: 90.26%
Epoch 39 - loss: 1.0024, acc: 90.00% / test_loss: 1.0039, test_acc: 89.91%
Epoch 40 - loss: 1.0031, acc: 89.96% / test_loss: 0.9985, test_acc: 90.36%
Epoch 41 - loss: 1.0027, acc: 90.00% / test_loss: 0.9986, test_acc: 90.37%
Epoch 42 - loss: 1.0016, acc: 90.12% / test_loss: 1.0008, test_acc: 90.22%
Epoch 43 - loss: 1.0007, acc: 90.15% / test_loss: 1.0005, test_acc: 90.23%
Epoch 44 - loss: 1.0012, acc: 90.12% / test_loss: 0.9974, test_acc: 90.46%
Epoch 45 - loss: 1.0004, acc: 90.18% / test_loss: 0.9975, test_acc: 90.46%
Epoch 46 - loss: 1.0011, acc: 90.12% / test_loss: 0.9981, test_acc: 90.47%
Epoch 47 - loss: 1.0012, acc: 90.16% / test_loss: 1.0011, test_acc: 90.23%
Epoch 48 - loss: 1.0002, acc: 90.22% / test_loss: 0.9966, test_acc: 90.58%
Epoch 49 - loss: 0.9999, acc: 90.24% / test_loss: 0.9989, test_acc: 90.40%
Epoch 50 - loss: 1.0005, acc: 90.16% / test_loss: 0.9985, test_acc: 90.35%
Epoch 51 - loss: 1.0001, acc: 90.22% / test_loss: 0.9995, test_acc: 90.31%
Epoch 52 - loss: 1.0029, acc: 89.97% / test_loss: 0.9976, test_acc: 90.46%
Epoch 53 - loss: 0.9988, acc: 90.36% / test_loss: 0.9986, test_acc: 90.31%
Epoch 54 - loss: 0.9981, acc: 90.38% / test_loss: 0.9967, test_acc: 90.58%
Epoch 55 - loss: 0.9973, acc: 90.49% / test_loss: 0.9968, test_acc: 90.56%
Epoch 56 - loss: 0.9978, acc: 90.44% / test_loss: 0.9967, test_acc: 90.58%
Epoch 57 - loss: 0.9964, acc: 90.56% / test_loss: 0.9944, test_acc: 90.74%
Epoch 58 - loss: 0.9968, acc: 90.51% / test_loss: 0.9965, test_acc: 90.57%
Epoch 59 - loss: 0.9972, acc: 90.51% / test_loss: 0.9969, test_acc: 90.56%
Epoch 60 - loss: 0.9966, acc: 90.55% / test_loss: 0.9937, test_acc: 90.82%
Epoch 61 - loss: 0.9960, acc: 90.62% / test_loss: 0.9946, test_acc: 90.76%
Epoch 62 - loss: 0.9929, acc: 90.86% / test_loss: 0.9912, test_acc: 91.17%
Epoch 63 - loss: 0.9925, acc: 91.01% / test_loss: 1.0003, test_acc: 90.13%
Epoch 64 - loss: 0.9870, acc: 91.49% / test_loss: 0.9884, test_acc: 91.34%
Epoch 65 - loss: 0.9853, acc: 91.70% / test_loss: 0.9934, test_acc: 90.94%
Epoch 66 - loss: 0.9837, acc: 91.87% / test_loss: 0.9869, test_acc: 91.61%
Epoch 67 - loss: 0.9817, acc: 92.01% / test_loss: 0.9841, test_acc: 91.78%
Epoch 68 - loss: 0.9806, acc: 92.13% / test_loss: 0.9829, test_acc: 91.89%
Epoch 69 - loss: 0.9792, acc: 92.28% / test_loss: 0.9805, test_acc: 92.14%
Epoch 70 - loss: 0.9796, acc: 92.25% / test_loss: 0.9805, test_acc: 92.13%
Epoch 71 - loss: 0.9777, acc: 92.42% / test_loss: 0.9810, test_acc: 92.09%
Epoch 72 - loss: 0.9794, acc: 92.21% / test_loss: 0.9820, test_acc: 92.07%
Epoch 73 - loss: 0.9778, acc: 92.39% / test_loss: 0.9804, test_acc: 92.12%
Epoch 74 - loss: 0.9762, acc: 92.57% / test_loss: 0.9815, test_acc: 92.06%
Epoch 75 - loss: 0.9761, acc: 92.60% / test_loss: 0.9810, test_acc: 92.11%
Epoch 76 - loss: 0.9757, acc: 92.63% / test_loss: 0.9784, test_acc: 92.32%
Epoch 77 - loss: 0.9759, acc: 92.57% / test_loss: 0.9787, test_acc: 92.32%
Epoch 78 - loss: 0.9763, acc: 92.55% / test_loss: 0.9815, test_acc: 92.03%
Epoch 79 - loss: 0.9764, acc: 92.52% / test_loss: 0.9776, test_acc: 92.43%
Epoch 80 - loss: 0.9754, acc: 92.64% / test_loss: 0.9813, test_acc: 92.08%
Epoch 81 - loss: 0.9756, acc: 92.65% / test_loss: 0.9801, test_acc: 92.25%
Epoch 82 - loss: 0.9742, acc: 92.74% / test_loss: 0.9810, test_acc: 92.20%
Epoch 83 - loss: 0.9745, acc: 92.71% / test_loss: 0.9817, test_acc: 92.01%
Epoch 84 - loss: 0.9737, acc: 92.80% / test_loss: 0.9776, test_acc: 92.40%
Epoch 85 - loss: 0.9764, acc: 92.53% / test_loss: 0.9794, test_acc: 92.27%
Epoch 86 - loss: 0.9747, acc: 92.71% / test_loss: 0.9769, test_acc: 92.43%
Epoch 87 - loss: 0.9743, acc: 92.73% / test_loss: 0.9780, test_acc: 92.31%
Epoch 88 - loss: 0.9739, acc: 92.80% / test_loss: 0.9776, test_acc: 92.41%
Epoch 89 - loss: 0.9741, acc: 92.75% / test_loss: 0.9766, test_acc: 92.45%
Epoch 90 - loss: 0.9746, acc: 92.74% / test_loss: 0.9774, test_acc: 92.44%
Epoch 91 - loss: 0.9732, acc: 92.81% / test_loss: 0.9769, test_acc: 92.49%
Epoch 92 - loss: 0.9749, acc: 92.67% / test_loss: 0.9827, test_acc: 91.94%
Epoch 93 - loss: 0.9747, acc: 92.71% / test_loss: 0.9769, test_acc: 92.47%
Epoch 94 - loss: 0.9740, acc: 92.75% / test_loss: 0.9790, test_acc: 92.27%
Epoch 95 - loss: 0.9728, acc: 92.87% / test_loss: 0.9763, test_acc: 92.55%
Epoch 96 - loss: 0.9725, acc: 92.90% / test_loss: 0.9762, test_acc: 92.47%
Epoch 97 - loss: 0.9723, acc: 92.91% / test_loss: 0.9787, test_acc: 92.26%
Epoch 98 - loss: 0.9730, acc: 92.85% / test_loss: 0.9761, test_acc: 92.53%
Epoch 99 - loss: 0.9733, acc: 92.82% / test_loss: 0.9820, test_acc: 92.01%
Epoch 100 - loss: 0.9719, acc: 92.95% / test_loss: 0.9760, test_acc: 92.55%
Epoch 101 - loss: 0.9719, acc: 92.96% / test_loss: 0.9767, test_acc: 92.47%
Epoch 102 - loss: 0.9719, acc: 92.97% / test_loss: 0.9766, test_acc: 92.51%
Epoch 103 - loss: 0.9719, acc: 92.96% / test_loss: 0.9891, test_acc: 91.26%
Epoch 104 - loss: 0.9715, acc: 93.04% / test_loss: 0.9753, test_acc: 92.58%
Epoch 105 - loss: 0.9702, acc: 93.14% / test_loss: 0.9759, test_acc: 92.56%
Epoch 106 - loss: 0.9712, acc: 93.04% / test_loss: 0.9749, test_acc: 92.65%
Epoch 107 - loss: 0.9705, acc: 93.12% / test_loss: 0.9739, test_acc: 92.78%
Epoch 108 - loss: 0.9707, acc: 93.06% / test_loss: 0.9747, test_acc: 92.71%
Epoch 109 - loss: 0.9706, acc: 93.09% / test_loss: 0.9746, test_acc: 92.72%
Epoch 110 - loss: 0.9693, acc: 93.21% / test_loss: 0.9749, test_acc: 92.64%
Epoch 111 - loss: 0.9687, acc: 93.27% / test_loss: 0.9728, test_acc: 92.85%
Epoch 112 - loss: 0.9686, acc: 93.28% / test_loss: 0.9724, test_acc: 92.89%
Epoch 113 - loss: 0.9686, acc: 93.28% / test_loss: 0.9781, test_acc: 92.34%
Epoch 114 - loss: 0.9698, acc: 93.17% / test_loss: 0.9757, test_acc: 92.56%
Epoch 115 - loss: 0.9684, acc: 93.33% / test_loss: 0.9723, test_acc: 92.93%
Epoch 116 - loss: 0.9682, acc: 93.33% / test_loss: 0.9741, test_acc: 92.71%
Epoch 117 - loss: 0.9683, acc: 93.31% / test_loss: 0.9717, test_acc: 92.93%
Epoch 118 - loss: 0.9681, acc: 93.31% / test_loss: 0.9763, test_acc: 92.50%
Epoch 119 - loss: 0.9692, acc: 93.23% / test_loss: 0.9742, test_acc: 92.74%
Epoch 120 - loss: 0.9696, acc: 93.20% / test_loss: 0.9714, test_acc: 92.99%
Epoch 121 - loss: 0.9689, acc: 93.25% / test_loss: 0.9720, test_acc: 92.91%
Epoch 122 - loss: 0.9683, acc: 93.30% / test_loss: 0.9723, test_acc: 92.88%
Epoch 123 - loss: 0.9682, acc: 93.31% / test_loss: 0.9727, test_acc: 92.90%
Epoch 124 - loss: 0.9681, acc: 93.33% / test_loss: 0.9708, test_acc: 93.05%
Epoch 125 - loss: 0.9693, acc: 93.19% / test_loss: 0.9707, test_acc: 93.05%
Epoch 126 - loss: 0.9672, acc: 93.39% / test_loss: 0.9729, test_acc: 92.83%
Epoch 127 - loss: 0.9671, acc: 93.43% / test_loss: 0.9722, test_acc: 92.90%
Epoch 128 - loss: 0.9676, acc: 93.39% / test_loss: 0.9735, test_acc: 92.80%
Epoch 129 - loss: 0.9681, acc: 93.33% / test_loss: 0.9738, test_acc: 92.72%
Epoch 130 - loss: 0.9686, acc: 93.30% / test_loss: 0.9710, test_acc: 93.05%
Epoch 131 - loss: 0.9674, acc: 93.39% / test_loss: 0.9753, test_acc: 92.65%
Epoch 132 - loss: 0.9676, acc: 93.38% / test_loss: 0.9712, test_acc: 93.00%
Epoch 133 - loss: 0.9659, acc: 93.53% / test_loss: 0.9705, test_acc: 93.09%
Epoch 134 - loss: 0.9654, acc: 93.57% / test_loss: 0.9699, test_acc: 93.08%
Epoch 135 - loss: 0.9665, acc: 93.46% / test_loss: 0.9719, test_acc: 92.93%
Epoch 136 - loss: 0.9672, acc: 93.39% / test_loss: 0.9708, test_acc: 93.00%
Epoch 137 - loss: 0.9667, acc: 93.45% / test_loss: 0.9721, test_acc: 92.91%
Epoch 138 - loss: 0.9675, acc: 93.39% / test_loss: 0.9715, test_acc: 92.96%
Epoch 139 - loss: 0.9656, acc: 93.57% / test_loss: 0.9699, test_acc: 93.19%
Epoch 140 - loss: 0.9662, acc: 93.53% / test_loss: 0.9705, test_acc: 93.08%
Epoch 141 - loss: 0.9669, acc: 93.42% / test_loss: 0.9719, test_acc: 92.94%
Epoch 142 - loss: 0.9654, acc: 93.57% / test_loss: 0.9706, test_acc: 93.05%
Epoch 143 - loss: 0.9659, acc: 93.53% / test_loss: 0.9717, test_acc: 92.96%
Epoch 144 - loss: 0.9658, acc: 93.52% / test_loss: 0.9705, test_acc: 93.03%
Epoch 145 - loss: 0.9670, acc: 93.45% / test_loss: 0.9717, test_acc: 92.95%
Epoch 146 - loss: 0.9659, acc: 93.54% / test_loss: 0.9697, test_acc: 93.20%
Epoch 147 - loss: 0.9668, acc: 93.43% / test_loss: 0.9716, test_acc: 92.95%
Epoch 148 - loss: 0.9660, acc: 93.50% / test_loss: 0.9701, test_acc: 93.11%
Epoch 149 - loss: 0.9664, acc: 93.49% / test_loss: 0.9702, test_acc: 93.08%
Epoch 150 - loss: 0.9670, acc: 93.40% / test_loss: 0.9690, test_acc: 93.23%
Epoch 151 - loss: 0.9651, acc: 93.62% / test_loss: 0.9693, test_acc: 93.24%
Epoch 152 - loss: 0.9650, acc: 93.63% / test_loss: 0.9701, test_acc: 93.17%
Epoch 153 - loss: 0.9650, acc: 93.61% / test_loss: 0.9695, test_acc: 93.17%
Epoch 154 - loss: 0.9647, acc: 93.67% / test_loss: 0.9703, test_acc: 93.07%
Epoch 155 - loss: 0.9652, acc: 93.62% / test_loss: 0.9698, test_acc: 93.13%
Epoch 156 - loss: 0.9651, acc: 93.61% / test_loss: 0.9691, test_acc: 93.19%
Epoch 157 - loss: 0.9649, acc: 93.62% / test_loss: 0.9694, test_acc: 93.16%
Epoch 158 - loss: 0.9681, acc: 93.30% / test_loss: 0.9737, test_acc: 92.74%
Epoch 159 - loss: 0.9657, acc: 93.55% / test_loss: 0.9698, test_acc: 93.13%
Epoch 160 - loss: 0.9655, acc: 93.57% / test_loss: 0.9700, test_acc: 93.13%
Epoch 161 - loss: 0.9650, acc: 93.61% / test_loss: 0.9701, test_acc: 93.14%
Epoch 162 - loss: 0.9640, acc: 93.70% / test_loss: 0.9694, test_acc: 93.16%
Epoch 163 - loss: 0.9656, acc: 93.54% / test_loss: 0.9714, test_acc: 93.00%
Epoch 164 - loss: 0.9648, acc: 93.64% / test_loss: 0.9692, test_acc: 93.24%
Epoch 165 - loss: 0.9647, acc: 93.67% / test_loss: 0.9700, test_acc: 93.15%
Epoch 166 - loss: 0.9646, acc: 93.67% / test_loss: 0.9710, test_acc: 93.01%
Epoch 167 - loss: 0.9651, acc: 93.60% / test_loss: 0.9704, test_acc: 93.06%
Epoch 168 - loss: 0.9649, acc: 93.61% / test_loss: 0.9717, test_acc: 92.96%
Epoch 169 - loss: 0.9636, acc: 93.74% / test_loss: 0.9703, test_acc: 93.12%
Epoch 170 - loss: 0.9646, acc: 93.63% / test_loss: 0.9694, test_acc: 93.16%
Epoch 171 - loss: 0.9648, acc: 93.60% / test_loss: 0.9692, test_acc: 93.19%
Epoch 172 - loss: 0.9632, acc: 93.79% / test_loss: 0.9679, test_acc: 93.36%
Epoch 173 - loss: 0.9625, acc: 93.82% / test_loss: 0.9675, test_acc: 93.36%
Epoch 174 - loss: 0.9633, acc: 93.77% / test_loss: 0.9687, test_acc: 93.27%
Epoch 175 - loss: 0.9639, acc: 93.70% / test_loss: 0.9698, test_acc: 93.16%
Epoch 176 - loss: 0.9642, acc: 93.70% / test_loss: 0.9674, test_acc: 93.36%
Epoch 177 - loss: 0.9635, acc: 93.77% / test_loss: 0.9699, test_acc: 93.06%
Epoch 178 - loss: 0.9640, acc: 93.70% / test_loss: 0.9683, test_acc: 93.27%
Epoch 179 - loss: 0.9628, acc: 93.84% / test_loss: 0.9686, test_acc: 93.21%
Epoch 180 - loss: 0.9637, acc: 93.71% / test_loss: 0.9681, test_acc: 93.32%
Epoch 181 - loss: 0.9652, acc: 93.58% / test_loss: 0.9680, test_acc: 93.34%
Epoch 182 - loss: 0.9628, acc: 93.82% / test_loss: 0.9723, test_acc: 92.86%
Epoch 183 - loss: 0.9638, acc: 93.72% / test_loss: 0.9672, test_acc: 93.37%
Epoch 184 - loss: 0.9625, acc: 93.84% / test_loss: 0.9678, test_acc: 93.30%
Epoch 185 - loss: 0.9643, acc: 93.66% / test_loss: 0.9752, test_acc: 92.53%
Epoch 186 - loss: 0.9638, acc: 93.74% / test_loss: 0.9687, test_acc: 93.24%
Epoch 187 - loss: 0.9632, acc: 93.76% / test_loss: 0.9673, test_acc: 93.36%
Epoch 188 - loss: 0.9635, acc: 93.74% / test_loss: 0.9696, test_acc: 93.14%
Epoch 189 - loss: 0.9631, acc: 93.80% / test_loss: 0.9683, test_acc: 93.24%
Epoch 190 - loss: 0.9631, acc: 93.80% / test_loss: 0.9672, test_acc: 93.37%
Epoch 191 - loss: 0.9631, acc: 93.79% / test_loss: 0.9682, test_acc: 93.27%
Epoch 192 - loss: 0.9625, acc: 93.86% / test_loss: 0.9673, test_acc: 93.36%
Epoch 193 - loss: 0.9622, acc: 93.88% / test_loss: 0.9683, test_acc: 93.29%
Epoch 194 - loss: 0.9633, acc: 93.77% / test_loss: 0.9683, test_acc: 93.28%
Epoch 195 - loss: 0.9617, acc: 93.88% / test_loss: 0.9665, test_acc: 93.45%
Epoch 196 - loss: 0.9635, acc: 93.74% / test_loss: 0.9697, test_acc: 93.17%
Epoch 197 - loss: 0.9640, acc: 93.69% / test_loss: 0.9693, test_acc: 93.13%
Epoch 198 - loss: 0.9622, acc: 93.88% / test_loss: 0.9688, test_acc: 93.23%
Epoch 199 - loss: 0.9624, acc: 93.87% / test_loss: 0.9674, test_acc: 93.37%
Epoch 200 - loss: 0.9624, acc: 93.85% / test_loss: 0.9665, test_acc: 93.45%
Epoch 201 - loss: 0.9626, acc: 93.82% / test_loss: 0.9665, test_acc: 93.45%
Epoch 202 - loss: 0.9617, acc: 93.90% / test_loss: 0.9693, test_acc: 93.12%
Epoch 203 - loss: 0.9621, acc: 93.93% / test_loss: 0.9668, test_acc: 93.39%
Epoch 204 - loss: 0.9614, acc: 93.94% / test_loss: 0.9662, test_acc: 93.51%
Epoch 205 - loss: 0.9616, acc: 93.94% / test_loss: 0.9673, test_acc: 93.36%
Epoch 206 - loss: 0.9617, acc: 93.91% / test_loss: 0.9689, test_acc: 93.20%
Epoch 207 - loss: 0.9621, acc: 93.90% / test_loss: 0.9678, test_acc: 93.30%
Epoch 208 - loss: 0.9623, acc: 93.85% / test_loss: 0.9671, test_acc: 93.39%
Epoch 209 - loss: 0.9614, acc: 93.94% / test_loss: 0.9691, test_acc: 93.19%
Epoch 210 - loss: 0.9621, acc: 93.88% / test_loss: 0.9679, test_acc: 93.27%
Epoch 211 - loss: 0.9618, acc: 93.91% / test_loss: 0.9682, test_acc: 93.27%
Epoch 212 - loss: 0.9638, acc: 93.72% / test_loss: 0.9695, test_acc: 93.20%
Epoch 213 - loss: 0.9623, acc: 93.87% / test_loss: 0.9691, test_acc: 93.18%
Epoch 214 - loss: 0.9613, acc: 93.95% / test_loss: 0.9664, test_acc: 93.44%
Epoch 215 - loss: 0.9613, acc: 93.95% / test_loss: 0.9672, test_acc: 93.40%
Epoch 216 - loss: 0.9616, acc: 93.95% / test_loss: 0.9688, test_acc: 93.24%
Epoch 217 - loss: 0.9619, acc: 93.92% / test_loss: 0.9675, test_acc: 93.35%
Epoch 218 - loss: 0.9622, acc: 93.88% / test_loss: 0.9681, test_acc: 93.32%
Epoch 219 - loss: 0.9630, acc: 93.76% / test_loss: 0.9686, test_acc: 93.21%
Epoch 220 - loss: 0.9618, acc: 93.90% / test_loss: 0.9669, test_acc: 93.40%
Epoch 221 - loss: 0.9609, acc: 93.98% / test_loss: 0.9674, test_acc: 93.37%
Epoch 222 - loss: 0.9616, acc: 93.94% / test_loss: 0.9660, test_acc: 93.50%
Epoch 223 - loss: 0.9609, acc: 93.99% / test_loss: 0.9670, test_acc: 93.42%
Epoch 224 - loss: 0.9621, acc: 93.86% / test_loss: 0.9676, test_acc: 93.30%
Epoch 225 - loss: 0.9608, acc: 94.01% / test_loss: 0.9663, test_acc: 93.45%
Epoch 226 - loss: 0.9612, acc: 93.98% / test_loss: 0.9677, test_acc: 93.28%
Epoch 227 - loss: 0.9605, acc: 94.01% / test_loss: 0.9665, test_acc: 93.42%
Epoch 228 - loss: 0.9618, acc: 93.89% / test_loss: 0.9673, test_acc: 93.36%
Epoch 229 - loss: 0.9622, acc: 93.85% / test_loss: 0.9685, test_acc: 93.27%
Epoch 230 - loss: 0.9623, acc: 93.88% / test_loss: 0.9669, test_acc: 93.41%
Epoch 231 - loss: 0.9626, acc: 93.84% / test_loss: 0.9687, test_acc: 93.24%
Epoch 232 - loss: 0.9617, acc: 93.92% / test_loss: 0.9671, test_acc: 93.39%
Epoch 233 - loss: 0.9637, acc: 93.73% / test_loss: 0.9681, test_acc: 93.29%
Epoch 234 - loss: 0.9614, acc: 93.96% / test_loss: 0.9662, test_acc: 93.50%
Epoch 235 - loss: 0.9613, acc: 93.94% / test_loss: 0.9676, test_acc: 93.32%
Epoch 236 - loss: 0.9616, acc: 93.93% / test_loss: 0.9678, test_acc: 93.33%
Epoch 237 - loss: 0.9614, acc: 93.94% / test_loss: 0.9667, test_acc: 93.41%
Epoch 238 - loss: 0.9609, acc: 93.99% / test_loss: 0.9668, test_acc: 93.43%
Epoch 239 - loss: 0.9610, acc: 93.97% / test_loss: 0.9721, test_acc: 92.89%
Epoch 240 - loss: 0.9615, acc: 93.93% / test_loss: 0.9663, test_acc: 93.46%
Epoch 241 - loss: 0.9612, acc: 93.94% / test_loss: 0.9669, test_acc: 93.40%
Epoch 242 - loss: 0.9628, acc: 93.80% / test_loss: 0.9713, test_acc: 92.99%
Epoch 243 - loss: 0.9623, acc: 93.86% / test_loss: 0.9669, test_acc: 93.41%
Epoch 244 - loss: 0.9613, acc: 93.93% / test_loss: 0.9670, test_acc: 93.42%
Epoch 245 - loss: 0.9614, acc: 93.95% / test_loss: 0.9662, test_acc: 93.46%
Epoch 246 - loss: 0.9614, acc: 93.94% / test_loss: 0.9669, test_acc: 93.42%
Epoch 247 - loss: 0.9613, acc: 93.94% / test_loss: 0.9653, test_acc: 93.57%
Epoch 248 - loss: 0.9613, acc: 93.98% / test_loss: 0.9658, test_acc: 93.48%
Epoch 249 - loss: 0.9609, acc: 94.00% / test_loss: 0.9671, test_acc: 93.39%
Epoch 250 - loss: 0.9610, acc: 93.98% / test_loss: 0.9692, test_acc: 93.17%
Epoch 251 - loss: 0.9605, acc: 94.04% / test_loss: 0.9654, test_acc: 93.51%
Epoch 252 - loss: 0.9601, acc: 94.06% / test_loss: 0.9657, test_acc: 93.51%
Epoch 253 - loss: 0.9603, acc: 94.03% / test_loss: 0.9651, test_acc: 93.59%
Epoch 254 - loss: 0.9607, acc: 94.01% / test_loss: 0.9670, test_acc: 93.40%
Epoch 255 - loss: 0.9612, acc: 93.98% / test_loss: 0.9661, test_acc: 93.48%
Epoch 256 - loss: 0.9620, acc: 93.88% / test_loss: 0.9684, test_acc: 93.24%
Epoch 257 - loss: 0.9611, acc: 93.98% / test_loss: 0.9685, test_acc: 93.24%
Epoch 258 - loss: 0.9616, acc: 93.91% / test_loss: 0.9679, test_acc: 93.33%
Epoch 259 - loss: 0.9624, acc: 93.82% / test_loss: 0.9676, test_acc: 93.31%
Epoch 260 - loss: 0.9622, acc: 93.87% / test_loss: 0.9673, test_acc: 93.37%
Epoch 261 - loss: 0.9602, acc: 94.07% / test_loss: 0.9657, test_acc: 93.51%
Epoch 262 - loss: 0.9612, acc: 93.96% / test_loss: 0.9655, test_acc: 93.51%
Epoch 263 - loss: 0.9616, acc: 93.91% / test_loss: 0.9709, test_acc: 93.01%
Epoch 264 - loss: 0.9614, acc: 93.94% / test_loss: 0.9657, test_acc: 93.50%
Epoch 265 - loss: 0.9604, acc: 94.02% / test_loss: 0.9669, test_acc: 93.42%
Epoch 266 - loss: 0.9400, acc: 96.31% / test_loss: 0.9256, test_acc: 97.89%
Epoch 267 - loss: 0.9174, acc: 98.74% / test_loss: 0.9248, test_acc: 97.97%
Epoch 268 - loss: 0.9178, acc: 98.69% / test_loss: 0.9238, test_acc: 98.08%
Epoch 269 - loss: 0.9186, acc: 98.62% / test_loss: 0.9257, test_acc: 97.90%
Epoch 270 - loss: 0.9184, acc: 98.61% / test_loss: 0.9233, test_acc: 98.16%
Epoch 271 - loss: 0.9165, acc: 98.82% / test_loss: 0.9241, test_acc: 98.04%
Epoch 272 - loss: 0.9173, acc: 98.74% / test_loss: 0.9234, test_acc: 98.14%
Epoch 273 - loss: 0.9187, acc: 98.60% / test_loss: 0.9262, test_acc: 97.86%
Epoch 274 - loss: 0.9183, acc: 98.63% / test_loss: 0.9258, test_acc: 97.89%
Epoch 275 - loss: 0.9182, acc: 98.67% / test_loss: 0.9230, test_acc: 98.17%
Epoch 276 - loss: 0.9169, acc: 98.78% / test_loss: 0.9225, test_acc: 98.20%
Epoch 277 - loss: 0.9167, acc: 98.79% / test_loss: 0.9279, test_acc: 97.67%
Epoch 278 - loss: 0.9176, acc: 98.70% / test_loss: 0.9242, test_acc: 98.05%
Epoch 279 - loss: 0.9181, acc: 98.66% / test_loss: 0.9235, test_acc: 98.12%
Epoch 280 - loss: 0.9163, acc: 98.83% / test_loss: 0.9229, test_acc: 98.16%
Epoch 281 - loss: 0.9165, acc: 98.80% / test_loss: 0.9253, test_acc: 97.95%
Epoch 282 - loss: 0.9182, acc: 98.65% / test_loss: 0.9290, test_acc: 97.58%
Epoch 283 - loss: 0.9171, acc: 98.77% / test_loss: 0.9237, test_acc: 98.10%
Epoch 284 - loss: 0.9169, acc: 98.78% / test_loss: 0.9245, test_acc: 98.01%
Epoch 285 - loss: 0.9167, acc: 98.78% / test_loss: 0.9236, test_acc: 98.09%
Epoch 286 - loss: 0.9169, acc: 98.78% / test_loss: 0.9242, test_acc: 98.04%
Epoch 287 - loss: 0.9160, acc: 98.87% / test_loss: 0.9234, test_acc: 98.12%
Epoch 288 - loss: 0.9171, acc: 98.77% / test_loss: 0.9244, test_acc: 98.02%
Epoch 289 - loss: 0.9171, acc: 98.78% / test_loss: 0.9235, test_acc: 98.14%
Epoch 290 - loss: 0.9169, acc: 98.77% / test_loss: 0.9246, test_acc: 98.01%
Epoch 291 - loss: 0.9177, acc: 98.70% / test_loss: 0.9245, test_acc: 98.00%
Epoch 292 - loss: 0.9171, acc: 98.76% / test_loss: 0.9257, test_acc: 97.93%
Epoch 293 - loss: 0.9177, acc: 98.68% / test_loss: 0.9226, test_acc: 98.23%
Epoch 294 - loss: 0.9145, acc: 99.03% / test_loss: 0.9242, test_acc: 98.03%
Epoch 295 - loss: 0.9133, acc: 99.15% / test_loss: 0.9209, test_acc: 98.38%
Epoch 296 - loss: 0.9143, acc: 99.09% / test_loss: 0.9253, test_acc: 97.98%
Epoch 297 - loss: 0.9123, acc: 99.28% / test_loss: 0.9211, test_acc: 98.40%
Epoch 298 - loss: 0.9132, acc: 99.16% / test_loss: 0.9212, test_acc: 98.33%
Epoch 299 - loss: 0.9144, acc: 99.04% / test_loss: 0.9212, test_acc: 98.33%
Epoch 300 - loss: 0.9129, acc: 99.19% / test_loss: 0.9219, test_acc: 98.31%
Epoch 301 - loss: 0.9133, acc: 99.15% / test_loss: 0.9237, test_acc: 98.10%
Epoch 302 - loss: 0.9135, acc: 99.15% / test_loss: 0.9259, test_acc: 97.86%
Epoch 303 - loss: 0.9148, acc: 99.01% / test_loss: 0.9216, test_acc: 98.30%
Epoch 304 - loss: 0.9136, acc: 99.13% / test_loss: 0.9231, test_acc: 98.18%
Epoch 305 - loss: 0.9130, acc: 99.19% / test_loss: 0.9217, test_acc: 98.30%
Epoch 306 - loss: 0.9140, acc: 99.07% / test_loss: 0.9241, test_acc: 98.04%
Epoch 307 - loss: 0.9125, acc: 99.24% / test_loss: 0.9208, test_acc: 98.41%
Epoch 308 - loss: 0.9122, acc: 99.26% / test_loss: 0.9216, test_acc: 98.32%
Epoch 309 - loss: 0.9119, acc: 99.31% / test_loss: 0.9204, test_acc: 98.45%
Epoch 310 - loss: 0.9131, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 311 - loss: 0.9143, acc: 99.05% / test_loss: 0.9214, test_acc: 98.35%
Epoch 312 - loss: 0.9128, acc: 99.20% / test_loss: 0.9224, test_acc: 98.24%
Epoch 313 - loss: 0.9128, acc: 99.21% / test_loss: 0.9206, test_acc: 98.42%
Epoch 314 - loss: 0.9131, acc: 99.17% / test_loss: 0.9243, test_acc: 98.03%
Epoch 315 - loss: 0.9129, acc: 99.20% / test_loss: 0.9212, test_acc: 98.37%
Epoch 316 - loss: 0.9125, acc: 99.24% / test_loss: 0.9262, test_acc: 97.86%
Epoch 317 - loss: 0.9133, acc: 99.16% / test_loss: 0.9206, test_acc: 98.39%
Epoch 318 - loss: 0.9132, acc: 99.15% / test_loss: 0.9206, test_acc: 98.43%
Epoch 319 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.35%
Epoch 320 - loss: 0.9125, acc: 99.24% / test_loss: 0.9210, test_acc: 98.38%
Epoch 321 - loss: 0.9133, acc: 99.15% / test_loss: 0.9215, test_acc: 98.35%
Epoch 322 - loss: 0.9130, acc: 99.18% / test_loss: 0.9210, test_acc: 98.35%
Epoch 323 - loss: 0.9123, acc: 99.27% / test_loss: 0.9204, test_acc: 98.44%
Epoch 324 - loss: 0.9128, acc: 99.21% / test_loss: 0.9222, test_acc: 98.27%
Epoch 325 - loss: 0.9139, acc: 99.09% / test_loss: 0.9210, test_acc: 98.40%
Epoch 326 - loss: 0.9130, acc: 99.18% / test_loss: 0.9209, test_acc: 98.37%
Epoch 327 - loss: 0.9117, acc: 99.32% / test_loss: 0.9216, test_acc: 98.32%
Epoch 328 - loss: 0.9151, acc: 98.96% / test_loss: 0.9231, test_acc: 98.19%
Epoch 329 - loss: 0.9123, acc: 99.25% / test_loss: 0.9215, test_acc: 98.33%
Epoch 330 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.33%
Epoch 331 - loss: 0.9124, acc: 99.26% / test_loss: 0.9211, test_acc: 98.38%
Epoch 332 - loss: 0.9116, acc: 99.34% / test_loss: 0.9206, test_acc: 98.40%
Epoch 333 - loss: 0.9126, acc: 99.24% / test_loss: 0.9214, test_acc: 98.37%
Epoch 334 - loss: 0.9132, acc: 99.15% / test_loss: 0.9209, test_acc: 98.38%
Epoch 335 - loss: 0.9125, acc: 99.24% / test_loss: 0.9212, test_acc: 98.32%
Epoch 336 - loss: 0.9132, acc: 99.16% / test_loss: 0.9201, test_acc: 98.47%
Epoch 337 - loss: 0.9126, acc: 99.23% / test_loss: 0.9201, test_acc: 98.50%
Epoch 338 - loss: 0.9131, acc: 99.17% / test_loss: 0.9208, test_acc: 98.39%
Epoch 339 - loss: 0.9119, acc: 99.31% / test_loss: 0.9204, test_acc: 98.42%
Epoch 340 - loss: 0.9120, acc: 99.29% / test_loss: 0.9206, test_acc: 98.44%
Epoch 341 - loss: 0.9121, acc: 99.28% / test_loss: 0.9213, test_acc: 98.32%
Epoch 342 - loss: 0.9136, acc: 99.12% / test_loss: 0.9217, test_acc: 98.29%
Epoch 343 - loss: 0.9133, acc: 99.17% / test_loss: 0.9211, test_acc: 98.37%
Epoch 344 - loss: 0.9124, acc: 99.26% / test_loss: 0.9203, test_acc: 98.40%
Epoch 345 - loss: 0.9121, acc: 99.28% / test_loss: 0.9233, test_acc: 98.13%
Epoch 346 - loss: 0.9126, acc: 99.23% / test_loss: 0.9197, test_acc: 98.51%
Epoch 347 - loss: 0.9116, acc: 99.32% / test_loss: 0.9210, test_acc: 98.35%
Epoch 348 - loss: 0.9137, acc: 99.12% / test_loss: 0.9231, test_acc: 98.16%
Epoch 349 - loss: 0.9129, acc: 99.18% / test_loss: 0.9200, test_acc: 98.47%
Epoch 350 - loss: 0.9122, acc: 99.24% / test_loss: 0.9210, test_acc: 98.37%
Epoch 351 - loss: 0.9124, acc: 99.23% / test_loss: 0.9201, test_acc: 98.46%
Epoch 352 - loss: 0.9119, acc: 99.31% / test_loss: 0.9228, test_acc: 98.23%
Epoch 353 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.19%
Epoch 354 - loss: 0.9126, acc: 99.24% / test_loss: 0.9232, test_acc: 98.14%
Epoch 355 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.21%
Epoch 356 - loss: 0.9127, acc: 99.22% / test_loss: 0.9206, test_acc: 98.43%
Epoch 357 - loss: 0.9125, acc: 99.24% / test_loss: 0.9200, test_acc: 98.50%
Epoch 358 - loss: 0.9141, acc: 99.06% / test_loss: 0.9215, test_acc: 98.32%
Epoch 359 - loss: 0.9119, acc: 99.30% / test_loss: 0.9214, test_acc: 98.35%
Epoch 360 - loss: 0.9127, acc: 99.22% / test_loss: 0.9228, test_acc: 98.23%
Epoch 361 - loss: 0.9124, acc: 99.25% / test_loss: 0.9207, test_acc: 98.39%
Epoch 362 - loss: 0.9117, acc: 99.31% / test_loss: 0.9203, test_acc: 98.42%
Epoch 363 - loss: 0.9122, acc: 99.27% / test_loss: 0.9199, test_acc: 98.47%
Epoch 364 - loss: 0.9135, acc: 99.15% / test_loss: 0.9225, test_acc: 98.22%
Epoch 365 - loss: 0.9131, acc: 99.17% / test_loss: 0.9205, test_acc: 98.41%
Epoch 366 - loss: 0.9142, acc: 99.06% / test_loss: 0.9208, test_acc: 98.41%
Epoch 367 - loss: 0.9120, acc: 99.28% / test_loss: 0.9230, test_acc: 98.17%
Epoch 368 - loss: 0.9125, acc: 99.24% / test_loss: 0.9208, test_acc: 98.39%
Epoch 369 - loss: 0.9133, acc: 99.14% / test_loss: 0.9216, test_acc: 98.30%
Epoch 370 - loss: 0.9125, acc: 99.23% / test_loss: 0.9209, test_acc: 98.38%
Epoch 371 - loss: 0.9121, acc: 99.28% / test_loss: 0.9210, test_acc: 98.36%
Epoch 372 - loss: 0.9141, acc: 99.06% / test_loss: 0.9208, test_acc: 98.42%
Epoch 373 - loss: 0.9124, acc: 99.25% / test_loss: 0.9203, test_acc: 98.44%
Epoch 374 - loss: 0.9117, acc: 99.31% / test_loss: 0.9196, test_acc: 98.51%
Epoch 375 - loss: 0.9116, acc: 99.32% / test_loss: 0.9225, test_acc: 98.23%
Epoch 376 - loss: 0.9116, acc: 99.32% / test_loss: 0.9205, test_acc: 98.46%
Epoch 377 - loss: 0.9126, acc: 99.23% / test_loss: 0.9210, test_acc: 98.36%
Epoch 378 - loss: 0.9140, acc: 99.07% / test_loss: 0.9225, test_acc: 98.23%
Epoch 379 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.35%
Epoch 380 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.32%
Epoch 381 - loss: 0.9128, acc: 99.21% / test_loss: 0.9220, test_acc: 98.29%
Epoch 382 - loss: 0.9128, acc: 99.21% / test_loss: 0.9209, test_acc: 98.38%
Epoch 383 - loss: 0.9121, acc: 99.28% / test_loss: 0.9218, test_acc: 98.30%
Epoch 384 - loss: 0.9120, acc: 99.28% / test_loss: 0.9216, test_acc: 98.33%
Epoch 385 - loss: 0.9121, acc: 99.27% / test_loss: 0.9204, test_acc: 98.44%
Epoch 386 - loss: 0.9119, acc: 99.31% / test_loss: 0.9227, test_acc: 98.20%
Epoch 387 - loss: 0.9129, acc: 99.18% / test_loss: 0.9220, test_acc: 98.26%
Epoch 388 - loss: 0.9140, acc: 99.07% / test_loss: 0.9214, test_acc: 98.35%
Epoch 389 - loss: 0.9122, acc: 99.27% / test_loss: 0.9198, test_acc: 98.49%
Epoch 390 - loss: 0.9115, acc: 99.33% / test_loss: 0.9201, test_acc: 98.45%
Epoch 391 - loss: 0.9119, acc: 99.29% / test_loss: 0.9205, test_acc: 98.41%
Epoch 392 - loss: 0.9116, acc: 99.34% / test_loss: 0.9204, test_acc: 98.45%
Epoch 393 - loss: 0.9125, acc: 99.24% / test_loss: 0.9251, test_acc: 97.98%
Epoch 394 - loss: 0.9152, acc: 98.97% / test_loss: 0.9218, test_acc: 98.30%
Epoch 395 - loss: 0.9149, acc: 98.96% / test_loss: 0.9230, test_acc: 98.16%
Epoch 396 - loss: 0.9127, acc: 99.21% / test_loss: 0.9226, test_acc: 98.24%
Epoch 397 - loss: 0.9120, acc: 99.28% / test_loss: 0.9210, test_acc: 98.38%
Epoch 398 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.32%
Epoch 399 - loss: 0.9120, acc: 99.28% / test_loss: 0.9206, test_acc: 98.40%
Epoch 400 - loss: 0.9132, acc: 99.15% / test_loss: 0.9220, test_acc: 98.28%
Best test accuracy 98.51% in epoch 346.
----------------------------------------------------------------------------------------------------
Run 8
Epoch 1 - loss: 1.3461, acc: 55.67% / test_loss: 1.1790, test_acc: 72.85%
Epoch 2 - loss: 1.1469, acc: 76.53% / test_loss: 1.0799, test_acc: 83.56%
Epoch 3 - loss: 1.0654, acc: 84.58% / test_loss: 1.0582, test_acc: 85.41%
Epoch 4 - loss: 1.0503, acc: 85.70% / test_loss: 1.0367, test_acc: 86.92%
Epoch 5 - loss: 1.0421, acc: 86.48% / test_loss: 1.0332, test_acc: 87.21%
Epoch 6 - loss: 1.0365, acc: 86.89% / test_loss: 1.0269, test_acc: 88.00%
Epoch 7 - loss: 1.0318, acc: 87.41% / test_loss: 1.0298, test_acc: 87.74%
Epoch 8 - loss: 1.0270, acc: 87.83% / test_loss: 1.0185, test_acc: 88.76%
Epoch 9 - loss: 1.0264, acc: 87.87% / test_loss: 1.0180, test_acc: 88.76%
Epoch 10 - loss: 1.0227, acc: 88.21% / test_loss: 1.0125, test_acc: 89.17%
Epoch 11 - loss: 1.0195, acc: 88.46% / test_loss: 1.0148, test_acc: 88.92%
Epoch 12 - loss: 1.0186, acc: 88.55% / test_loss: 1.0163, test_acc: 88.79%
Epoch 13 - loss: 1.0171, acc: 88.69% / test_loss: 1.0154, test_acc: 88.89%
Epoch 14 - loss: 1.0155, acc: 88.89% / test_loss: 1.0098, test_acc: 89.39%
Epoch 15 - loss: 1.0155, acc: 88.80% / test_loss: 1.0095, test_acc: 89.41%
Epoch 16 - loss: 1.0132, acc: 89.00% / test_loss: 1.0077, test_acc: 89.55%
Epoch 17 - loss: 1.0132, acc: 89.07% / test_loss: 1.0065, test_acc: 89.70%
Epoch 18 - loss: 1.0131, acc: 88.99% / test_loss: 1.0102, test_acc: 89.32%
Epoch 19 - loss: 1.0130, acc: 89.08% / test_loss: 1.0060, test_acc: 89.65%
Epoch 20 - loss: 1.0115, acc: 89.20% / test_loss: 1.0057, test_acc: 89.80%
Epoch 21 - loss: 1.0097, acc: 89.36% / test_loss: 1.0084, test_acc: 89.72%
Epoch 22 - loss: 1.0079, acc: 89.57% / test_loss: 1.0042, test_acc: 89.97%
Epoch 23 - loss: 1.0084, acc: 89.50% / test_loss: 1.0037, test_acc: 90.09%
Epoch 24 - loss: 1.0066, acc: 89.66% / test_loss: 1.0054, test_acc: 89.75%
Epoch 25 - loss: 1.0057, acc: 89.74% / test_loss: 1.0123, test_acc: 89.41%
Epoch 26 - loss: 1.0053, acc: 89.82% / test_loss: 1.0015, test_acc: 90.17%
Epoch 27 - loss: 1.0046, acc: 89.85% / test_loss: 1.0025, test_acc: 90.12%
Epoch 28 - loss: 1.0030, acc: 89.97% / test_loss: 1.0001, test_acc: 90.22%
Epoch 29 - loss: 1.0049, acc: 89.85% / test_loss: 1.0014, test_acc: 90.15%
Epoch 30 - loss: 1.0028, acc: 90.03% / test_loss: 0.9991, test_acc: 90.43%
Epoch 31 - loss: 1.0030, acc: 90.01% / test_loss: 0.9987, test_acc: 90.37%
Epoch 32 - loss: 1.0025, acc: 90.06% / test_loss: 0.9985, test_acc: 90.41%
Epoch 33 - loss: 1.0014, acc: 90.18% / test_loss: 0.9982, test_acc: 90.43%
Epoch 34 - loss: 1.0035, acc: 89.92% / test_loss: 0.9987, test_acc: 90.37%
Epoch 35 - loss: 1.0043, acc: 89.88% / test_loss: 0.9982, test_acc: 90.38%
Epoch 36 - loss: 1.0018, acc: 90.09% / test_loss: 0.9980, test_acc: 90.44%
Epoch 37 - loss: 1.0011, acc: 90.11% / test_loss: 1.0001, test_acc: 90.39%
Epoch 38 - loss: 1.0009, acc: 90.18% / test_loss: 0.9993, test_acc: 90.32%
Epoch 39 - loss: 1.0001, acc: 90.25% / test_loss: 0.9986, test_acc: 90.35%
Epoch 40 - loss: 0.9990, acc: 90.33% / test_loss: 0.9984, test_acc: 90.39%
Epoch 41 - loss: 0.9996, acc: 90.32% / test_loss: 0.9996, test_acc: 90.27%
Epoch 42 - loss: 0.9991, acc: 90.34% / test_loss: 0.9968, test_acc: 90.56%
Epoch 43 - loss: 0.9992, acc: 90.33% / test_loss: 0.9975, test_acc: 90.54%
Epoch 44 - loss: 0.9954, acc: 90.71% / test_loss: 0.9927, test_acc: 90.91%
Epoch 45 - loss: 0.9949, acc: 90.72% / test_loss: 0.9925, test_acc: 90.96%
Epoch 46 - loss: 0.9916, acc: 91.09% / test_loss: 0.9974, test_acc: 90.74%
Epoch 47 - loss: 0.9894, acc: 91.29% / test_loss: 0.9904, test_acc: 91.30%
Epoch 48 - loss: 0.9879, acc: 91.46% / test_loss: 0.9870, test_acc: 91.52%
Epoch 49 - loss: 0.9851, acc: 91.75% / test_loss: 0.9847, test_acc: 91.73%
Epoch 50 - loss: 0.9827, acc: 91.95% / test_loss: 0.9833, test_acc: 91.85%
Epoch 51 - loss: 0.9824, acc: 91.98% / test_loss: 0.9871, test_acc: 91.57%
Epoch 52 - loss: 0.9813, acc: 92.07% / test_loss: 0.9818, test_acc: 91.97%
Epoch 53 - loss: 0.9808, acc: 92.13% / test_loss: 0.9826, test_acc: 91.99%
Epoch 54 - loss: 0.9798, acc: 92.25% / test_loss: 0.9831, test_acc: 91.97%
Epoch 55 - loss: 0.9792, acc: 92.30% / test_loss: 0.9814, test_acc: 92.11%
Epoch 56 - loss: 0.9772, acc: 92.48% / test_loss: 0.9807, test_acc: 92.13%
Epoch 57 - loss: 0.9777, acc: 92.48% / test_loss: 0.9806, test_acc: 92.12%
Epoch 58 - loss: 0.9785, acc: 92.37% / test_loss: 0.9813, test_acc: 92.10%
Epoch 59 - loss: 0.9779, acc: 92.39% / test_loss: 0.9805, test_acc: 92.19%
Epoch 60 - loss: 0.9780, acc: 92.43% / test_loss: 0.9782, test_acc: 92.40%
Epoch 61 - loss: 0.9757, acc: 92.59% / test_loss: 0.9816, test_acc: 92.05%
Epoch 62 - loss: 0.9772, acc: 92.48% / test_loss: 0.9779, test_acc: 92.40%
Epoch 63 - loss: 0.9759, acc: 92.64% / test_loss: 0.9797, test_acc: 92.22%
Epoch 64 - loss: 0.9764, acc: 92.57% / test_loss: 0.9836, test_acc: 91.79%
Epoch 65 - loss: 0.9750, acc: 92.69% / test_loss: 0.9839, test_acc: 91.95%
Epoch 66 - loss: 0.9760, acc: 92.60% / test_loss: 0.9794, test_acc: 92.30%
Epoch 67 - loss: 0.9745, acc: 92.74% / test_loss: 0.9767, test_acc: 92.56%
Epoch 68 - loss: 0.9735, acc: 92.86% / test_loss: 0.9791, test_acc: 92.36%
Epoch 69 - loss: 0.9742, acc: 92.77% / test_loss: 0.9801, test_acc: 92.25%
Epoch 70 - loss: 0.9741, acc: 92.77% / test_loss: 0.9794, test_acc: 92.22%
Epoch 71 - loss: 0.9736, acc: 92.84% / test_loss: 0.9780, test_acc: 92.39%
Epoch 72 - loss: 0.9748, acc: 92.70% / test_loss: 0.9808, test_acc: 92.03%
Epoch 73 - loss: 0.9739, acc: 92.75% / test_loss: 0.9775, test_acc: 92.39%
Epoch 74 - loss: 0.9734, acc: 92.87% / test_loss: 0.9777, test_acc: 92.43%
Epoch 75 - loss: 0.9734, acc: 92.84% / test_loss: 0.9768, test_acc: 92.43%
Epoch 76 - loss: 0.9729, acc: 92.90% / test_loss: 0.9760, test_acc: 92.57%
Epoch 77 - loss: 0.9738, acc: 92.77% / test_loss: 0.9759, test_acc: 92.51%
Epoch 78 - loss: 0.9723, acc: 92.96% / test_loss: 0.9765, test_acc: 92.59%
Epoch 79 - loss: 0.9730, acc: 92.87% / test_loss: 0.9777, test_acc: 92.39%
Epoch 80 - loss: 0.9751, acc: 92.71% / test_loss: 0.9790, test_acc: 92.30%
Epoch 81 - loss: 0.9734, acc: 92.83% / test_loss: 0.9765, test_acc: 92.53%
Epoch 82 - loss: 0.9715, acc: 92.99% / test_loss: 0.9767, test_acc: 92.53%
Epoch 83 - loss: 0.9722, acc: 92.97% / test_loss: 0.9754, test_acc: 92.61%
Epoch 84 - loss: 0.9717, acc: 92.99% / test_loss: 0.9757, test_acc: 92.57%
Epoch 85 - loss: 0.9708, acc: 93.09% / test_loss: 0.9760, test_acc: 92.57%
Epoch 86 - loss: 0.9706, acc: 93.07% / test_loss: 0.9764, test_acc: 92.48%
Epoch 87 - loss: 0.9718, acc: 93.02% / test_loss: 0.9753, test_acc: 92.67%
Epoch 88 - loss: 0.9723, acc: 92.94% / test_loss: 0.9779, test_acc: 92.38%
Epoch 89 - loss: 0.9716, acc: 93.02% / test_loss: 0.9765, test_acc: 92.48%
Epoch 90 - loss: 0.9710, acc: 93.05% / test_loss: 0.9759, test_acc: 92.56%
Epoch 91 - loss: 0.9702, acc: 93.15% / test_loss: 0.9749, test_acc: 92.65%
Epoch 92 - loss: 0.9714, acc: 93.05% / test_loss: 0.9741, test_acc: 92.67%
Epoch 93 - loss: 0.9698, acc: 93.19% / test_loss: 0.9744, test_acc: 92.73%
Epoch 94 - loss: 0.9706, acc: 93.10% / test_loss: 0.9765, test_acc: 92.51%
Epoch 95 - loss: 0.9694, acc: 93.26% / test_loss: 0.9731, test_acc: 92.92%
Epoch 96 - loss: 0.9688, acc: 93.26% / test_loss: 0.9746, test_acc: 92.70%
Epoch 97 - loss: 0.9699, acc: 93.12% / test_loss: 0.9735, test_acc: 92.86%
Epoch 98 - loss: 0.9693, acc: 93.23% / test_loss: 0.9759, test_acc: 92.62%
Epoch 99 - loss: 0.9709, acc: 93.07% / test_loss: 0.9770, test_acc: 92.53%
Epoch 100 - loss: 0.9693, acc: 93.24% / test_loss: 0.9734, test_acc: 92.80%
Epoch 101 - loss: 0.9693, acc: 93.26% / test_loss: 0.9731, test_acc: 92.84%
Epoch 102 - loss: 0.9700, acc: 93.14% / test_loss: 0.9766, test_acc: 92.47%
Epoch 103 - loss: 0.9682, acc: 93.30% / test_loss: 0.9722, test_acc: 92.92%
Epoch 104 - loss: 0.9678, acc: 93.36% / test_loss: 0.9723, test_acc: 92.96%
Epoch 105 - loss: 0.9679, acc: 93.36% / test_loss: 0.9723, test_acc: 92.93%
Epoch 106 - loss: 0.9678, acc: 93.40% / test_loss: 0.9741, test_acc: 92.77%
Epoch 107 - loss: 0.9686, acc: 93.30% / test_loss: 0.9748, test_acc: 92.65%
Epoch 108 - loss: 0.9690, acc: 93.24% / test_loss: 0.9722, test_acc: 92.92%
Epoch 109 - loss: 0.9683, acc: 93.32% / test_loss: 0.9732, test_acc: 92.83%
Epoch 110 - loss: 0.9687, acc: 93.27% / test_loss: 0.9737, test_acc: 92.71%
Epoch 111 - loss: 0.9678, acc: 93.36% / test_loss: 0.9712, test_acc: 92.97%
Epoch 112 - loss: 0.9674, acc: 93.37% / test_loss: 0.9710, test_acc: 92.99%
Epoch 113 - loss: 0.9675, acc: 93.36% / test_loss: 0.9731, test_acc: 92.85%
Epoch 114 - loss: 0.9682, acc: 93.35% / test_loss: 0.9709, test_acc: 93.05%
Epoch 115 - loss: 0.9677, acc: 93.36% / test_loss: 0.9710, test_acc: 93.03%
Epoch 116 - loss: 0.9689, acc: 93.27% / test_loss: 0.9731, test_acc: 92.85%
Epoch 117 - loss: 0.9698, acc: 93.18% / test_loss: 0.9713, test_acc: 92.98%
Epoch 118 - loss: 0.9669, acc: 93.44% / test_loss: 0.9707, test_acc: 93.08%
Epoch 119 - loss: 0.9672, acc: 93.40% / test_loss: 0.9728, test_acc: 92.86%
Epoch 120 - loss: 0.9676, acc: 93.37% / test_loss: 0.9797, test_acc: 92.22%
Epoch 121 - loss: 0.9676, acc: 93.38% / test_loss: 0.9712, test_acc: 93.02%
Epoch 122 - loss: 0.9672, acc: 93.39% / test_loss: 0.9716, test_acc: 92.98%
Epoch 123 - loss: 0.9671, acc: 93.44% / test_loss: 0.9696, test_acc: 93.18%
Epoch 124 - loss: 0.9658, acc: 93.55% / test_loss: 0.9699, test_acc: 93.11%
Epoch 125 - loss: 0.9673, acc: 93.40% / test_loss: 0.9754, test_acc: 92.55%
Epoch 126 - loss: 0.9667, acc: 93.44% / test_loss: 0.9701, test_acc: 93.12%
Epoch 127 - loss: 0.9701, acc: 93.14% / test_loss: 0.9718, test_acc: 92.94%
Epoch 128 - loss: 0.9675, acc: 93.39% / test_loss: 0.9715, test_acc: 92.96%
Epoch 129 - loss: 0.9660, acc: 93.52% / test_loss: 0.9708, test_acc: 93.02%
Epoch 130 - loss: 0.9658, acc: 93.54% / test_loss: 0.9700, test_acc: 93.14%
Epoch 131 - loss: 0.9689, acc: 93.24% / test_loss: 0.9750, test_acc: 92.63%
Epoch 132 - loss: 0.9671, acc: 93.42% / test_loss: 0.9713, test_acc: 93.00%
Epoch 133 - loss: 0.9679, acc: 93.36% / test_loss: 0.9706, test_acc: 93.05%
Epoch 134 - loss: 0.9656, acc: 93.54% / test_loss: 0.9719, test_acc: 92.92%
Epoch 135 - loss: 0.9682, acc: 93.29% / test_loss: 0.9724, test_acc: 92.87%
Epoch 136 - loss: 0.9672, acc: 93.41% / test_loss: 0.9703, test_acc: 93.11%
Epoch 137 - loss: 0.9671, acc: 93.42% / test_loss: 0.9709, test_acc: 92.99%
Epoch 138 - loss: 0.9667, acc: 93.44% / test_loss: 0.9714, test_acc: 92.98%
Epoch 139 - loss: 0.9669, acc: 93.43% / test_loss: 0.9706, test_acc: 93.02%
Epoch 140 - loss: 0.9662, acc: 93.49% / test_loss: 0.9705, test_acc: 93.09%
Epoch 141 - loss: 0.9666, acc: 93.47% / test_loss: 0.9720, test_acc: 92.93%
Epoch 142 - loss: 0.9671, acc: 93.39% / test_loss: 0.9712, test_acc: 93.02%
Epoch 143 - loss: 0.9658, acc: 93.53% / test_loss: 0.9697, test_acc: 93.15%
Epoch 144 - loss: 0.9667, acc: 93.45% / test_loss: 0.9714, test_acc: 93.01%
Epoch 145 - loss: 0.9691, acc: 93.23% / test_loss: 0.9741, test_acc: 92.74%
Epoch 146 - loss: 0.9667, acc: 93.44% / test_loss: 0.9705, test_acc: 93.07%
Epoch 147 - loss: 0.9659, acc: 93.51% / test_loss: 0.9708, test_acc: 93.08%
Epoch 148 - loss: 0.9663, acc: 93.49% / test_loss: 0.9718, test_acc: 92.93%
Epoch 149 - loss: 0.9668, acc: 93.42% / test_loss: 0.9709, test_acc: 93.01%
Epoch 150 - loss: 0.9663, acc: 93.48% / test_loss: 0.9704, test_acc: 93.08%
Epoch 151 - loss: 0.9663, acc: 93.51% / test_loss: 0.9726, test_acc: 92.85%
Epoch 152 - loss: 0.9676, acc: 93.37% / test_loss: 0.9710, test_acc: 92.99%
Epoch 153 - loss: 0.9671, acc: 93.42% / test_loss: 0.9701, test_acc: 93.11%
Epoch 154 - loss: 0.9647, acc: 93.62% / test_loss: 0.9704, test_acc: 93.05%
Epoch 155 - loss: 0.9654, acc: 93.56% / test_loss: 0.9746, test_acc: 92.66%
Epoch 156 - loss: 0.9656, acc: 93.56% / test_loss: 0.9701, test_acc: 93.11%
Epoch 157 - loss: 0.9650, acc: 93.64% / test_loss: 0.9717, test_acc: 92.90%
Epoch 158 - loss: 0.9655, acc: 93.57% / test_loss: 0.9725, test_acc: 92.83%
Epoch 159 - loss: 0.9667, acc: 93.45% / test_loss: 0.9701, test_acc: 93.07%
Epoch 160 - loss: 0.9659, acc: 93.54% / test_loss: 0.9717, test_acc: 92.99%
Epoch 161 - loss: 0.9663, acc: 93.50% / test_loss: 0.9704, test_acc: 93.11%
Epoch 162 - loss: 0.9649, acc: 93.61% / test_loss: 0.9691, test_acc: 93.20%
Epoch 163 - loss: 0.9650, acc: 93.61% / test_loss: 0.9708, test_acc: 93.04%
Epoch 164 - loss: 0.9641, acc: 93.72% / test_loss: 0.9695, test_acc: 93.16%
Epoch 165 - loss: 0.9650, acc: 93.59% / test_loss: 0.9700, test_acc: 93.11%
Epoch 166 - loss: 0.9652, acc: 93.60% / test_loss: 0.9708, test_acc: 93.01%
Epoch 167 - loss: 0.9653, acc: 93.60% / test_loss: 0.9699, test_acc: 93.09%
Epoch 168 - loss: 0.9646, acc: 93.63% / test_loss: 0.9692, test_acc: 93.20%
Epoch 169 - loss: 0.9643, acc: 93.67% / test_loss: 0.9693, test_acc: 93.24%
Epoch 170 - loss: 0.9649, acc: 93.61% / test_loss: 0.9699, test_acc: 93.16%
Epoch 171 - loss: 0.9644, acc: 93.67% / test_loss: 0.9688, test_acc: 93.22%
Epoch 172 - loss: 0.9637, acc: 93.73% / test_loss: 0.9692, test_acc: 93.17%
Epoch 173 - loss: 0.9650, acc: 93.58% / test_loss: 0.9702, test_acc: 93.08%
Epoch 174 - loss: 0.9649, acc: 93.62% / test_loss: 0.9694, test_acc: 93.14%
Epoch 175 - loss: 0.9652, acc: 93.59% / test_loss: 0.9697, test_acc: 93.13%
Epoch 176 - loss: 0.9650, acc: 93.61% / test_loss: 0.9691, test_acc: 93.19%
Epoch 177 - loss: 0.9648, acc: 93.63% / test_loss: 0.9698, test_acc: 93.13%
Epoch 178 - loss: 0.9639, acc: 93.72% / test_loss: 0.9696, test_acc: 93.15%
Epoch 179 - loss: 0.9641, acc: 93.67% / test_loss: 0.9688, test_acc: 93.19%
Epoch 180 - loss: 0.9625, acc: 93.84% / test_loss: 0.9674, test_acc: 93.42%
Epoch 181 - loss: 0.9642, acc: 93.69% / test_loss: 0.9674, test_acc: 93.41%
Epoch 182 - loss: 0.9515, acc: 95.06% / test_loss: 0.9261, test_acc: 97.90%
Epoch 183 - loss: 0.9183, acc: 98.66% / test_loss: 0.9246, test_acc: 98.09%
Epoch 184 - loss: 0.9181, acc: 98.72% / test_loss: 0.9243, test_acc: 98.07%
Epoch 185 - loss: 0.9169, acc: 98.78% / test_loss: 0.9238, test_acc: 98.13%
Epoch 186 - loss: 0.9178, acc: 98.71% / test_loss: 0.9242, test_acc: 98.08%
Epoch 187 - loss: 0.9168, acc: 98.80% / test_loss: 0.9261, test_acc: 97.91%
Epoch 188 - loss: 0.9167, acc: 98.81% / test_loss: 0.9246, test_acc: 97.98%
Epoch 189 - loss: 0.9168, acc: 98.81% / test_loss: 0.9241, test_acc: 98.12%
Epoch 190 - loss: 0.9159, acc: 98.92% / test_loss: 0.9234, test_acc: 98.14%
Epoch 191 - loss: 0.9157, acc: 98.93% / test_loss: 0.9237, test_acc: 98.11%
Epoch 192 - loss: 0.9166, acc: 98.82% / test_loss: 0.9230, test_acc: 98.19%
Epoch 193 - loss: 0.9160, acc: 98.90% / test_loss: 0.9254, test_acc: 97.96%
Epoch 194 - loss: 0.9162, acc: 98.86% / test_loss: 0.9260, test_acc: 97.88%
Epoch 195 - loss: 0.9182, acc: 98.69% / test_loss: 0.9241, test_acc: 98.10%
Epoch 196 - loss: 0.9171, acc: 98.81% / test_loss: 0.9238, test_acc: 98.11%
Epoch 197 - loss: 0.9154, acc: 98.96% / test_loss: 0.9239, test_acc: 98.06%
Epoch 198 - loss: 0.9158, acc: 98.90% / test_loss: 0.9238, test_acc: 98.10%
Epoch 199 - loss: 0.9149, acc: 98.99% / test_loss: 0.9241, test_acc: 98.07%
Epoch 200 - loss: 0.9153, acc: 98.95% / test_loss: 0.9243, test_acc: 98.07%
Epoch 201 - loss: 0.9150, acc: 98.97% / test_loss: 0.9238, test_acc: 98.08%
Epoch 202 - loss: 0.9169, acc: 98.79% / test_loss: 0.9233, test_acc: 98.17%
Epoch 203 - loss: 0.9148, acc: 99.01% / test_loss: 0.9253, test_acc: 97.95%
Epoch 204 - loss: 0.9154, acc: 98.97% / test_loss: 0.9276, test_acc: 97.75%
Epoch 205 - loss: 0.9160, acc: 98.92% / test_loss: 0.9238, test_acc: 98.09%
Epoch 206 - loss: 0.9153, acc: 98.97% / test_loss: 0.9236, test_acc: 98.12%
Epoch 207 - loss: 0.9151, acc: 98.98% / test_loss: 0.9223, test_acc: 98.27%
Epoch 208 - loss: 0.9143, acc: 99.07% / test_loss: 0.9226, test_acc: 98.23%
Epoch 209 - loss: 0.9144, acc: 99.04% / test_loss: 0.9227, test_acc: 98.19%
Epoch 210 - loss: 0.9141, acc: 99.07% / test_loss: 0.9227, test_acc: 98.21%
Epoch 211 - loss: 0.9153, acc: 98.97% / test_loss: 0.9256, test_acc: 97.94%
Epoch 212 - loss: 0.9169, acc: 98.80% / test_loss: 0.9235, test_acc: 98.14%
Epoch 213 - loss: 0.9150, acc: 99.00% / test_loss: 0.9233, test_acc: 98.15%
Epoch 214 - loss: 0.9147, acc: 99.02% / test_loss: 0.9218, test_acc: 98.31%
Epoch 215 - loss: 0.9157, acc: 98.94% / test_loss: 0.9242, test_acc: 98.07%
Epoch 216 - loss: 0.9160, acc: 98.89% / test_loss: 0.9249, test_acc: 98.01%
Epoch 217 - loss: 0.9164, acc: 98.85% / test_loss: 0.9245, test_acc: 98.02%
Epoch 218 - loss: 0.9156, acc: 98.96% / test_loss: 0.9240, test_acc: 98.06%
Epoch 219 - loss: 0.9149, acc: 98.99% / test_loss: 0.9224, test_acc: 98.23%
Epoch 220 - loss: 0.9144, acc: 99.06% / test_loss: 0.9265, test_acc: 97.83%
Epoch 221 - loss: 0.9143, acc: 99.05% / test_loss: 0.9248, test_acc: 97.98%
Epoch 222 - loss: 0.9144, acc: 99.05% / test_loss: 0.9222, test_acc: 98.23%
Epoch 223 - loss: 0.9154, acc: 98.99% / test_loss: 0.9327, test_acc: 97.21%
Epoch 224 - loss: 0.9159, acc: 98.89% / test_loss: 0.9247, test_acc: 98.01%
Epoch 225 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.14%
Epoch 226 - loss: 0.9135, acc: 99.14% / test_loss: 0.9223, test_acc: 98.23%
Epoch 227 - loss: 0.9144, acc: 99.03% / test_loss: 0.9243, test_acc: 98.04%
Epoch 228 - loss: 0.9143, acc: 99.06% / test_loss: 0.9217, test_acc: 98.32%
Epoch 229 - loss: 0.9140, acc: 99.10% / test_loss: 0.9232, test_acc: 98.17%
Epoch 230 - loss: 0.9152, acc: 98.94% / test_loss: 0.9243, test_acc: 98.10%
Epoch 231 - loss: 0.9142, acc: 99.06% / test_loss: 0.9242, test_acc: 98.01%
Epoch 232 - loss: 0.9152, acc: 98.97% / test_loss: 0.9224, test_acc: 98.29%
Epoch 233 - loss: 0.9142, acc: 99.05% / test_loss: 0.9236, test_acc: 98.13%
Epoch 234 - loss: 0.9138, acc: 99.11% / test_loss: 0.9238, test_acc: 98.09%
Epoch 235 - loss: 0.9155, acc: 98.94% / test_loss: 0.9232, test_acc: 98.12%
Epoch 236 - loss: 0.9149, acc: 98.98% / test_loss: 0.9254, test_acc: 97.91%
Epoch 237 - loss: 0.9144, acc: 99.06% / test_loss: 0.9238, test_acc: 98.12%
Epoch 238 - loss: 0.9133, acc: 99.16% / test_loss: 0.9231, test_acc: 98.18%
Epoch 239 - loss: 0.9139, acc: 99.10% / test_loss: 0.9230, test_acc: 98.17%
Epoch 240 - loss: 0.9136, acc: 99.15% / test_loss: 0.9235, test_acc: 98.10%
Epoch 241 - loss: 0.9135, acc: 99.15% / test_loss: 0.9223, test_acc: 98.24%
Epoch 242 - loss: 0.9137, acc: 99.10% / test_loss: 0.9238, test_acc: 98.11%
Epoch 243 - loss: 0.9137, acc: 99.12% / test_loss: 0.9216, test_acc: 98.33%
Epoch 244 - loss: 0.9130, acc: 99.19% / test_loss: 0.9214, test_acc: 98.35%
Epoch 245 - loss: 0.9129, acc: 99.20% / test_loss: 0.9229, test_acc: 98.20%
Epoch 246 - loss: 0.9152, acc: 98.97% / test_loss: 0.9219, test_acc: 98.30%
Epoch 247 - loss: 0.9144, acc: 99.03% / test_loss: 0.9228, test_acc: 98.18%
Epoch 248 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.13%
Epoch 249 - loss: 0.9138, acc: 99.12% / test_loss: 0.9232, test_acc: 98.14%
Epoch 250 - loss: 0.9134, acc: 99.15% / test_loss: 0.9249, test_acc: 97.99%
Epoch 251 - loss: 0.9143, acc: 99.06% / test_loss: 0.9223, test_acc: 98.24%
Epoch 252 - loss: 0.9142, acc: 99.06% / test_loss: 0.9221, test_acc: 98.28%
Epoch 253 - loss: 0.9150, acc: 99.00% / test_loss: 0.9235, test_acc: 98.12%
Epoch 254 - loss: 0.9144, acc: 99.02% / test_loss: 0.9223, test_acc: 98.26%
Epoch 255 - loss: 0.9133, acc: 99.15% / test_loss: 0.9216, test_acc: 98.33%
Epoch 256 - loss: 0.9140, acc: 99.08% / test_loss: 0.9242, test_acc: 98.08%
Epoch 257 - loss: 0.9140, acc: 99.09% / test_loss: 0.9236, test_acc: 98.13%
Epoch 258 - loss: 0.9139, acc: 99.09% / test_loss: 0.9229, test_acc: 98.20%
Epoch 259 - loss: 0.9139, acc: 99.09% / test_loss: 0.9216, test_acc: 98.34%
Epoch 260 - loss: 0.9130, acc: 99.20% / test_loss: 0.9229, test_acc: 98.19%
Epoch 261 - loss: 0.9130, acc: 99.18% / test_loss: 0.9224, test_acc: 98.24%
Epoch 262 - loss: 0.9134, acc: 99.14% / test_loss: 0.9216, test_acc: 98.33%
Epoch 263 - loss: 0.9132, acc: 99.17% / test_loss: 0.9218, test_acc: 98.28%
Epoch 264 - loss: 0.9153, acc: 98.97% / test_loss: 0.9232, test_acc: 98.14%
Epoch 265 - loss: 0.9142, acc: 99.06% / test_loss: 0.9226, test_acc: 98.18%
Epoch 266 - loss: 0.9131, acc: 99.20% / test_loss: 0.9221, test_acc: 98.24%
Epoch 267 - loss: 0.9132, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 268 - loss: 0.9136, acc: 99.12% / test_loss: 0.9216, test_acc: 98.32%
Epoch 269 - loss: 0.9142, acc: 99.04% / test_loss: 0.9237, test_acc: 98.11%
Epoch 270 - loss: 0.9144, acc: 99.04% / test_loss: 0.9260, test_acc: 97.85%
Epoch 271 - loss: 0.9134, acc: 99.15% / test_loss: 0.9232, test_acc: 98.15%
Epoch 272 - loss: 0.9134, acc: 99.13% / test_loss: 0.9225, test_acc: 98.19%
Epoch 273 - loss: 0.9137, acc: 99.09% / test_loss: 0.9283, test_acc: 97.61%
Epoch 274 - loss: 0.9141, acc: 99.07% / test_loss: 0.9223, test_acc: 98.26%
Epoch 275 - loss: 0.9135, acc: 99.12% / test_loss: 0.9228, test_acc: 98.17%
Epoch 276 - loss: 0.9134, acc: 99.14% / test_loss: 0.9234, test_acc: 98.16%
Epoch 277 - loss: 0.9139, acc: 99.11% / test_loss: 0.9226, test_acc: 98.24%
Epoch 278 - loss: 0.9137, acc: 99.12% / test_loss: 0.9259, test_acc: 97.91%
Epoch 279 - loss: 0.9130, acc: 99.21% / test_loss: 0.9233, test_acc: 98.15%
Epoch 280 - loss: 0.9143, acc: 99.05% / test_loss: 0.9228, test_acc: 98.19%
Epoch 281 - loss: 0.9135, acc: 99.14% / test_loss: 0.9254, test_acc: 97.97%
Epoch 282 - loss: 0.9144, acc: 99.04% / test_loss: 0.9222, test_acc: 98.28%
Epoch 283 - loss: 0.9143, acc: 99.05% / test_loss: 0.9229, test_acc: 98.16%
Epoch 284 - loss: 0.9136, acc: 99.15% / test_loss: 0.9235, test_acc: 98.13%
Epoch 285 - loss: 0.9142, acc: 99.07% / test_loss: 0.9311, test_acc: 97.36%
Epoch 286 - loss: 0.9133, acc: 99.15% / test_loss: 0.9225, test_acc: 98.21%
Epoch 287 - loss: 0.9130, acc: 99.18% / test_loss: 0.9227, test_acc: 98.21%
Epoch 288 - loss: 0.9135, acc: 99.15% / test_loss: 0.9228, test_acc: 98.17%
Epoch 289 - loss: 0.9143, acc: 99.05% / test_loss: 0.9222, test_acc: 98.26%
Epoch 290 - loss: 0.9137, acc: 99.12% / test_loss: 0.9213, test_acc: 98.32%
Epoch 291 - loss: 0.9124, acc: 99.24% / test_loss: 0.9219, test_acc: 98.28%
Epoch 292 - loss: 0.9127, acc: 99.21% / test_loss: 0.9262, test_acc: 97.83%
Epoch 293 - loss: 0.9132, acc: 99.18% / test_loss: 0.9241, test_acc: 98.07%
Epoch 294 - loss: 0.9145, acc: 99.04% / test_loss: 0.9224, test_acc: 98.20%
Epoch 295 - loss: 0.9141, acc: 99.06% / test_loss: 0.9225, test_acc: 98.21%
Epoch 296 - loss: 0.9130, acc: 99.18% / test_loss: 0.9209, test_acc: 98.41%
Epoch 297 - loss: 0.9128, acc: 99.19% / test_loss: 0.9216, test_acc: 98.32%
Epoch 298 - loss: 0.9121, acc: 99.28% / test_loss: 0.9236, test_acc: 98.09%
Epoch 299 - loss: 0.9142, acc: 99.09% / test_loss: 0.9226, test_acc: 98.22%
Epoch 300 - loss: 0.9132, acc: 99.18% / test_loss: 0.9223, test_acc: 98.26%
Epoch 301 - loss: 0.9139, acc: 99.09% / test_loss: 0.9265, test_acc: 97.80%
Epoch 302 - loss: 0.9137, acc: 99.12% / test_loss: 0.9231, test_acc: 98.17%
Epoch 303 - loss: 0.9128, acc: 99.21% / test_loss: 0.9231, test_acc: 98.15%
Epoch 304 - loss: 0.9138, acc: 99.09% / test_loss: 0.9278, test_acc: 97.72%
Epoch 305 - loss: 0.9141, acc: 99.09% / test_loss: 0.9236, test_acc: 98.12%
Epoch 306 - loss: 0.9133, acc: 99.15% / test_loss: 0.9232, test_acc: 98.17%
Epoch 307 - loss: 0.9133, acc: 99.15% / test_loss: 0.9223, test_acc: 98.24%
Epoch 308 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.29%
Epoch 309 - loss: 0.9141, acc: 99.07% / test_loss: 0.9216, test_acc: 98.32%
Epoch 310 - loss: 0.9131, acc: 99.18% / test_loss: 0.9220, test_acc: 98.26%
Epoch 311 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.26%
Epoch 312 - loss: 0.9132, acc: 99.17% / test_loss: 0.9211, test_acc: 98.38%
Epoch 313 - loss: 0.9132, acc: 99.15% / test_loss: 0.9207, test_acc: 98.40%
Epoch 314 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.31%
Epoch 315 - loss: 0.9141, acc: 99.05% / test_loss: 0.9243, test_acc: 98.05%
Epoch 316 - loss: 0.9136, acc: 99.14% / test_loss: 0.9227, test_acc: 98.17%
Epoch 317 - loss: 0.9126, acc: 99.23% / test_loss: 0.9214, test_acc: 98.31%
Epoch 318 - loss: 0.9131, acc: 99.18% / test_loss: 0.9244, test_acc: 98.03%
Epoch 319 - loss: 0.9127, acc: 99.21% / test_loss: 0.9224, test_acc: 98.20%
Epoch 320 - loss: 0.9127, acc: 99.21% / test_loss: 0.9230, test_acc: 98.20%
Epoch 321 - loss: 0.9154, acc: 98.94% / test_loss: 0.9253, test_acc: 97.94%
Epoch 322 - loss: 0.9141, acc: 99.06% / test_loss: 0.9207, test_acc: 98.42%
Epoch 323 - loss: 0.9125, acc: 99.24% / test_loss: 0.9206, test_acc: 98.39%
Epoch 324 - loss: 0.9128, acc: 99.19% / test_loss: 0.9217, test_acc: 98.31%
Epoch 325 - loss: 0.9141, acc: 99.07% / test_loss: 0.9228, test_acc: 98.18%
Epoch 326 - loss: 0.9132, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 327 - loss: 0.9132, acc: 99.16% / test_loss: 0.9233, test_acc: 98.13%
Epoch 328 - loss: 0.9146, acc: 99.01% / test_loss: 0.9220, test_acc: 98.29%
Epoch 329 - loss: 0.9137, acc: 99.11% / test_loss: 0.9235, test_acc: 98.13%
Epoch 330 - loss: 0.9128, acc: 99.20% / test_loss: 0.9211, test_acc: 98.34%
Epoch 331 - loss: 0.9123, acc: 99.25% / test_loss: 0.9216, test_acc: 98.31%
Epoch 332 - loss: 0.9129, acc: 99.20% / test_loss: 0.9221, test_acc: 98.26%
Epoch 333 - loss: 0.9132, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 334 - loss: 0.9125, acc: 99.25% / test_loss: 0.9253, test_acc: 97.95%
Epoch 335 - loss: 0.9127, acc: 99.22% / test_loss: 0.9221, test_acc: 98.25%
Epoch 336 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.30%
Epoch 337 - loss: 0.9142, acc: 99.08% / test_loss: 0.9226, test_acc: 98.20%
Epoch 338 - loss: 0.9142, acc: 99.08% / test_loss: 0.9222, test_acc: 98.26%
Epoch 339 - loss: 0.9135, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 340 - loss: 0.9136, acc: 99.10% / test_loss: 0.9215, test_acc: 98.34%
Epoch 341 - loss: 0.9136, acc: 99.12% / test_loss: 0.9221, test_acc: 98.26%
Epoch 342 - loss: 0.9130, acc: 99.20% / test_loss: 0.9238, test_acc: 98.07%
Epoch 343 - loss: 0.9124, acc: 99.25% / test_loss: 0.9219, test_acc: 98.28%
Epoch 344 - loss: 0.9125, acc: 99.23% / test_loss: 0.9216, test_acc: 98.34%
Epoch 345 - loss: 0.9130, acc: 99.18% / test_loss: 0.9254, test_acc: 97.88%
Epoch 346 - loss: 0.9147, acc: 99.01% / test_loss: 0.9253, test_acc: 97.91%
Epoch 347 - loss: 0.9133, acc: 99.15% / test_loss: 0.9228, test_acc: 98.18%
Epoch 348 - loss: 0.9133, acc: 99.15% / test_loss: 0.9219, test_acc: 98.27%
Epoch 349 - loss: 0.9142, acc: 99.08% / test_loss: 0.9268, test_acc: 97.77%
Epoch 350 - loss: 0.9150, acc: 98.97% / test_loss: 0.9220, test_acc: 98.28%
Epoch 351 - loss: 0.9129, acc: 99.20% / test_loss: 0.9235, test_acc: 98.12%
Epoch 352 - loss: 0.9133, acc: 99.14% / test_loss: 0.9215, test_acc: 98.31%
Epoch 353 - loss: 0.9122, acc: 99.27% / test_loss: 0.9227, test_acc: 98.17%
Epoch 354 - loss: 0.9132, acc: 99.17% / test_loss: 0.9229, test_acc: 98.20%
Epoch 355 - loss: 0.9126, acc: 99.22% / test_loss: 0.9220, test_acc: 98.27%
Epoch 356 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.25%
Epoch 357 - loss: 0.9123, acc: 99.24% / test_loss: 0.9243, test_acc: 98.04%
Epoch 358 - loss: 0.9127, acc: 99.22% / test_loss: 0.9217, test_acc: 98.31%
Epoch 359 - loss: 0.9120, acc: 99.28% / test_loss: 0.9221, test_acc: 98.26%
Epoch 360 - loss: 0.9122, acc: 99.26% / test_loss: 0.9229, test_acc: 98.17%
Epoch 361 - loss: 0.9140, acc: 99.09% / test_loss: 0.9224, test_acc: 98.21%
Epoch 362 - loss: 0.9134, acc: 99.14% / test_loss: 0.9224, test_acc: 98.23%
Epoch 363 - loss: 0.9138, acc: 99.12% / test_loss: 0.9225, test_acc: 98.22%
Epoch 364 - loss: 0.9141, acc: 99.07% / test_loss: 0.9230, test_acc: 98.14%
Epoch 365 - loss: 0.9141, acc: 99.04% / test_loss: 0.9224, test_acc: 98.24%
Epoch 366 - loss: 0.9127, acc: 99.21% / test_loss: 0.9237, test_acc: 98.13%
Epoch 367 - loss: 0.9141, acc: 99.07% / test_loss: 0.9226, test_acc: 98.23%
Epoch 368 - loss: 0.9128, acc: 99.20% / test_loss: 0.9218, test_acc: 98.29%
Epoch 369 - loss: 0.9131, acc: 99.16% / test_loss: 0.9219, test_acc: 98.29%
Epoch 370 - loss: 0.9122, acc: 99.27% / test_loss: 0.9211, test_acc: 98.37%
Epoch 371 - loss: 0.9119, acc: 99.29% / test_loss: 0.9213, test_acc: 98.36%
Epoch 372 - loss: 0.9123, acc: 99.26% / test_loss: 0.9225, test_acc: 98.28%
Epoch 373 - loss: 0.9130, acc: 99.21% / test_loss: 0.9227, test_acc: 98.20%
Epoch 374 - loss: 0.9126, acc: 99.22% / test_loss: 0.9207, test_acc: 98.42%
Epoch 375 - loss: 0.9130, acc: 99.17% / test_loss: 0.9243, test_acc: 98.07%
Epoch 376 - loss: 0.9142, acc: 99.06% / test_loss: 0.9217, test_acc: 98.31%
Epoch 377 - loss: 0.9127, acc: 99.22% / test_loss: 0.9246, test_acc: 97.98%
Epoch 378 - loss: 0.9138, acc: 99.11% / test_loss: 0.9221, test_acc: 98.25%
Epoch 379 - loss: 0.9131, acc: 99.18% / test_loss: 0.9215, test_acc: 98.32%
Epoch 380 - loss: 0.9124, acc: 99.25% / test_loss: 0.9205, test_acc: 98.41%
Epoch 381 - loss: 0.9128, acc: 99.19% / test_loss: 0.9347, test_acc: 97.02%
Epoch 382 - loss: 0.9138, acc: 99.12% / test_loss: 0.9220, test_acc: 98.29%
Epoch 383 - loss: 0.9143, acc: 99.03% / test_loss: 0.9212, test_acc: 98.35%
Epoch 384 - loss: 0.9132, acc: 99.15% / test_loss: 0.9213, test_acc: 98.36%
Epoch 385 - loss: 0.9129, acc: 99.19% / test_loss: 0.9205, test_acc: 98.41%
Epoch 386 - loss: 0.9129, acc: 99.20% / test_loss: 0.9216, test_acc: 98.32%
Epoch 387 - loss: 0.9131, acc: 99.18% / test_loss: 0.9230, test_acc: 98.19%
Epoch 388 - loss: 0.9131, acc: 99.18% / test_loss: 0.9210, test_acc: 98.38%
Epoch 389 - loss: 0.9127, acc: 99.20% / test_loss: 0.9226, test_acc: 98.19%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9214, test_acc: 98.33%
Epoch 391 - loss: 0.9120, acc: 99.28% / test_loss: 0.9209, test_acc: 98.36%
Epoch 392 - loss: 0.9122, acc: 99.26% / test_loss: 0.9209, test_acc: 98.37%
Epoch 393 - loss: 0.9122, acc: 99.26% / test_loss: 0.9211, test_acc: 98.38%
Epoch 394 - loss: 0.9129, acc: 99.18% / test_loss: 0.9237, test_acc: 98.14%
Epoch 395 - loss: 0.9143, acc: 99.07% / test_loss: 0.9238, test_acc: 98.10%
Epoch 396 - loss: 0.9130, acc: 99.18% / test_loss: 0.9206, test_acc: 98.42%
Epoch 397 - loss: 0.9124, acc: 99.24% / test_loss: 0.9214, test_acc: 98.35%
Epoch 398 - loss: 0.9123, acc: 99.24% / test_loss: 0.9207, test_acc: 98.41%
Epoch 399 - loss: 0.9121, acc: 99.27% / test_loss: 0.9223, test_acc: 98.26%
Epoch 400 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.22%
Best test accuracy 98.42% in epoch 322.
----------------------------------------------------------------------------------------------------
Run 9
Epoch 1 - loss: 1.3374, acc: 56.95% / test_loss: 1.1728, test_acc: 74.81%
Epoch 2 - loss: 1.1214, acc: 79.15% / test_loss: 1.0713, test_acc: 83.68%
Epoch 3 - loss: 1.0571, acc: 85.15% / test_loss: 1.0351, test_acc: 87.19%
Epoch 4 - loss: 1.0428, acc: 86.43% / test_loss: 1.0331, test_acc: 87.35%
Epoch 5 - loss: 1.0358, acc: 87.02% / test_loss: 1.0225, test_acc: 88.34%
Epoch 6 - loss: 1.0304, acc: 87.56% / test_loss: 1.0307, test_acc: 87.49%
Epoch 7 - loss: 1.0272, acc: 87.82% / test_loss: 1.0166, test_acc: 88.85%
Epoch 8 - loss: 1.0262, acc: 87.88% / test_loss: 1.0167, test_acc: 88.85%
Epoch 9 - loss: 1.0239, acc: 88.10% / test_loss: 1.0226, test_acc: 88.28%
Epoch 10 - loss: 1.0209, acc: 88.40% / test_loss: 1.0144, test_acc: 89.05%
Epoch 11 - loss: 1.0196, acc: 88.52% / test_loss: 1.0123, test_acc: 89.25%
Epoch 12 - loss: 1.0183, acc: 88.64% / test_loss: 1.0125, test_acc: 89.23%
Epoch 13 - loss: 1.0162, acc: 88.84% / test_loss: 1.0084, test_acc: 89.54%
Epoch 14 - loss: 1.0140, acc: 88.99% / test_loss: 1.0063, test_acc: 89.73%
Epoch 15 - loss: 1.0117, acc: 89.24% / test_loss: 1.0060, test_acc: 89.77%
Epoch 16 - loss: 1.0129, acc: 89.08% / test_loss: 1.0076, test_acc: 89.78%
Epoch 17 - loss: 1.0115, acc: 89.33% / test_loss: 1.0081, test_acc: 89.56%
Epoch 18 - loss: 1.0100, acc: 89.41% / test_loss: 1.0045, test_acc: 89.91%
Epoch 19 - loss: 1.0092, acc: 89.51% / test_loss: 1.0040, test_acc: 89.91%
Epoch 20 - loss: 1.0076, acc: 89.58% / test_loss: 1.0028, test_acc: 90.05%
Epoch 21 - loss: 1.0097, acc: 89.37% / test_loss: 1.0126, test_acc: 89.28%
Epoch 22 - loss: 1.0103, acc: 89.34% / test_loss: 1.0037, test_acc: 89.97%
Epoch 23 - loss: 1.0070, acc: 89.66% / test_loss: 1.0040, test_acc: 89.92%
Epoch 24 - loss: 1.0073, acc: 89.58% / test_loss: 1.0016, test_acc: 90.18%
Epoch 25 - loss: 1.0057, acc: 89.76% / test_loss: 1.0014, test_acc: 90.15%
Epoch 26 - loss: 1.0068, acc: 89.59% / test_loss: 1.0055, test_acc: 89.78%
Epoch 27 - loss: 1.0059, acc: 89.73% / test_loss: 1.0067, test_acc: 89.60%
Epoch 28 - loss: 1.0058, acc: 89.73% / test_loss: 1.0053, test_acc: 89.81%
Epoch 29 - loss: 1.0065, acc: 89.62% / test_loss: 1.0021, test_acc: 90.03%
Epoch 30 - loss: 1.0051, acc: 89.78% / test_loss: 1.0011, test_acc: 90.27%
Epoch 31 - loss: 1.0055, acc: 89.69% / test_loss: 1.0044, test_acc: 89.89%
Epoch 32 - loss: 1.0052, acc: 89.79% / test_loss: 1.0006, test_acc: 90.20%
Epoch 33 - loss: 1.0047, acc: 89.83% / test_loss: 1.0014, test_acc: 90.10%
Epoch 34 - loss: 1.0036, acc: 89.91% / test_loss: 0.9998, test_acc: 90.27%
Epoch 35 - loss: 1.0034, acc: 89.94% / test_loss: 1.0009, test_acc: 90.20%
Epoch 36 - loss: 1.0043, acc: 89.79% / test_loss: 1.0001, test_acc: 90.23%
Epoch 37 - loss: 1.0048, acc: 89.80% / test_loss: 1.0021, test_acc: 90.03%
Epoch 38 - loss: 1.0046, acc: 89.83% / test_loss: 0.9985, test_acc: 90.39%
Epoch 39 - loss: 1.0024, acc: 90.00% / test_loss: 1.0006, test_acc: 90.19%
Epoch 40 - loss: 1.0023, acc: 90.04% / test_loss: 1.0008, test_acc: 90.18%
Epoch 41 - loss: 1.0031, acc: 89.97% / test_loss: 1.0083, test_acc: 89.44%
Epoch 42 - loss: 1.0027, acc: 90.00% / test_loss: 1.0013, test_acc: 90.04%
Epoch 43 - loss: 1.0016, acc: 90.09% / test_loss: 1.0008, test_acc: 90.25%
Epoch 44 - loss: 1.0006, acc: 90.17% / test_loss: 0.9983, test_acc: 90.35%
Epoch 45 - loss: 1.0003, acc: 90.19% / test_loss: 0.9969, test_acc: 90.48%
Epoch 46 - loss: 1.0007, acc: 90.15% / test_loss: 0.9988, test_acc: 90.40%
Epoch 47 - loss: 1.0001, acc: 90.22% / test_loss: 0.9987, test_acc: 90.35%
Epoch 48 - loss: 0.9987, acc: 90.32% / test_loss: 0.9970, test_acc: 90.55%
Epoch 49 - loss: 0.9984, acc: 90.35% / test_loss: 0.9973, test_acc: 90.43%
Epoch 50 - loss: 1.0009, acc: 90.15% / test_loss: 0.9961, test_acc: 90.55%
Epoch 51 - loss: 0.9991, acc: 90.30% / test_loss: 0.9989, test_acc: 90.28%
Epoch 52 - loss: 0.9996, acc: 90.27% / test_loss: 0.9962, test_acc: 90.57%
Epoch 53 - loss: 0.9978, acc: 90.40% / test_loss: 0.9982, test_acc: 90.36%
Epoch 54 - loss: 0.9987, acc: 90.39% / test_loss: 0.9975, test_acc: 90.43%
Epoch 55 - loss: 0.9986, acc: 90.33% / test_loss: 0.9976, test_acc: 90.45%
Epoch 56 - loss: 0.9989, acc: 90.31% / test_loss: 0.9974, test_acc: 90.44%
Epoch 57 - loss: 0.9994, acc: 90.25% / test_loss: 1.0010, test_acc: 90.06%
Epoch 58 - loss: 0.9993, acc: 90.26% / test_loss: 0.9951, test_acc: 90.68%
Epoch 59 - loss: 0.9981, acc: 90.37% / test_loss: 0.9957, test_acc: 90.59%
Epoch 60 - loss: 0.9979, acc: 90.39% / test_loss: 0.9961, test_acc: 90.52%
Epoch 61 - loss: 0.9973, acc: 90.43% / test_loss: 0.9980, test_acc: 90.38%
Epoch 62 - loss: 0.9984, acc: 90.32% / test_loss: 0.9966, test_acc: 90.51%
Epoch 63 - loss: 0.9969, acc: 90.47% / test_loss: 0.9980, test_acc: 90.34%
Epoch 64 - loss: 0.9976, acc: 90.42% / test_loss: 1.0015, test_acc: 90.07%
Epoch 65 - loss: 0.9971, acc: 90.48% / test_loss: 0.9942, test_acc: 90.74%
Epoch 66 - loss: 0.9955, acc: 90.61% / test_loss: 0.9965, test_acc: 90.52%
Epoch 67 - loss: 0.9966, acc: 90.47% / test_loss: 0.9956, test_acc: 90.56%
Epoch 68 - loss: 0.9949, acc: 90.68% / test_loss: 0.9977, test_acc: 90.37%
Epoch 69 - loss: 0.9959, acc: 90.59% / test_loss: 0.9942, test_acc: 90.74%
Epoch 70 - loss: 0.9948, acc: 90.68% / test_loss: 0.9941, test_acc: 90.74%
Epoch 71 - loss: 0.9953, acc: 90.66% / test_loss: 0.9948, test_acc: 90.67%
Epoch 72 - loss: 0.9945, acc: 90.68% / test_loss: 0.9954, test_acc: 90.66%
Epoch 73 - loss: 0.9953, acc: 90.61% / test_loss: 0.9937, test_acc: 90.77%
Epoch 74 - loss: 0.9951, acc: 90.61% / test_loss: 0.9930, test_acc: 90.84%
Epoch 75 - loss: 0.9942, acc: 90.69% / test_loss: 0.9932, test_acc: 90.80%
Epoch 76 - loss: 0.9930, acc: 90.82% / test_loss: 0.9932, test_acc: 90.83%
Epoch 77 - loss: 0.9949, acc: 90.65% / test_loss: 0.9957, test_acc: 90.59%
Epoch 78 - loss: 0.9927, acc: 90.85% / test_loss: 0.9939, test_acc: 90.74%
Epoch 79 - loss: 0.9942, acc: 90.74% / test_loss: 0.9945, test_acc: 90.78%
Epoch 80 - loss: 0.9961, acc: 90.58% / test_loss: 0.9934, test_acc: 90.86%
Epoch 81 - loss: 0.9934, acc: 90.81% / test_loss: 0.9936, test_acc: 90.76%
Epoch 82 - loss: 0.9927, acc: 90.84% / test_loss: 0.9925, test_acc: 90.86%
Epoch 83 - loss: 0.9937, acc: 90.77% / test_loss: 0.9958, test_acc: 90.50%
Epoch 84 - loss: 0.9937, acc: 90.74% / test_loss: 0.9917, test_acc: 90.96%
Epoch 85 - loss: 0.9922, acc: 90.86% / test_loss: 0.9946, test_acc: 90.71%
Epoch 86 - loss: 0.9939, acc: 90.70% / test_loss: 0.9938, test_acc: 90.77%
Epoch 87 - loss: 0.9927, acc: 90.84% / test_loss: 0.9917, test_acc: 90.96%
Epoch 88 - loss: 0.9935, acc: 90.77% / test_loss: 0.9978, test_acc: 90.45%
Epoch 89 - loss: 0.9935, acc: 90.80% / test_loss: 0.9911, test_acc: 91.04%
Epoch 90 - loss: 0.9925, acc: 90.86% / test_loss: 0.9909, test_acc: 91.05%
Epoch 91 - loss: 0.9898, acc: 91.17% / test_loss: 0.9891, test_acc: 91.23%
Epoch 92 - loss: 0.9887, acc: 91.25% / test_loss: 0.9882, test_acc: 91.33%
Epoch 93 - loss: 0.9847, acc: 91.63% / test_loss: 0.9827, test_acc: 91.87%
Epoch 94 - loss: 0.9818, acc: 91.97% / test_loss: 0.9834, test_acc: 91.85%
Epoch 95 - loss: 0.9804, acc: 92.10% / test_loss: 0.9819, test_acc: 91.97%
Epoch 96 - loss: 0.9792, acc: 92.18% / test_loss: 0.9811, test_acc: 92.05%
Epoch 97 - loss: 0.9798, acc: 92.14% / test_loss: 0.9798, test_acc: 92.13%
Epoch 98 - loss: 0.9768, acc: 92.47% / test_loss: 0.9785, test_acc: 92.31%
Epoch 99 - loss: 0.9757, acc: 92.58% / test_loss: 0.9803, test_acc: 92.14%
Epoch 100 - loss: 0.9762, acc: 92.47% / test_loss: 0.9806, test_acc: 92.08%
Epoch 101 - loss: 0.9763, acc: 92.52% / test_loss: 0.9780, test_acc: 92.31%
Epoch 102 - loss: 0.9739, acc: 92.74% / test_loss: 0.9770, test_acc: 92.48%
Epoch 103 - loss: 0.9750, acc: 92.65% / test_loss: 0.9774, test_acc: 92.42%
Epoch 104 - loss: 0.9725, acc: 92.88% / test_loss: 0.9764, test_acc: 92.50%
Epoch 105 - loss: 0.9721, acc: 92.90% / test_loss: 0.9754, test_acc: 92.59%
Epoch 106 - loss: 0.9722, acc: 92.90% / test_loss: 0.9744, test_acc: 92.71%
Epoch 107 - loss: 0.9712, acc: 92.99% / test_loss: 0.9874, test_acc: 91.44%
Epoch 108 - loss: 0.9731, acc: 92.86% / test_loss: 0.9818, test_acc: 92.05%
Epoch 109 - loss: 0.9719, acc: 92.98% / test_loss: 0.9747, test_acc: 92.70%
Epoch 110 - loss: 0.9704, acc: 93.08% / test_loss: 0.9752, test_acc: 92.58%
Epoch 111 - loss: 0.9710, acc: 93.03% / test_loss: 0.9764, test_acc: 92.49%
Epoch 112 - loss: 0.9703, acc: 93.14% / test_loss: 0.9741, test_acc: 92.72%
Epoch 113 - loss: 0.9699, acc: 93.11% / test_loss: 0.9733, test_acc: 92.77%
Epoch 114 - loss: 0.9682, acc: 93.30% / test_loss: 0.9730, test_acc: 92.84%
Epoch 115 - loss: 0.9695, acc: 93.20% / test_loss: 0.9720, test_acc: 92.93%
Epoch 116 - loss: 0.9698, acc: 93.12% / test_loss: 0.9727, test_acc: 92.87%
Epoch 117 - loss: 0.9684, acc: 93.30% / test_loss: 0.9723, test_acc: 92.93%
Epoch 118 - loss: 0.9681, acc: 93.36% / test_loss: 0.9717, test_acc: 93.00%
Epoch 119 - loss: 0.9678, acc: 93.36% / test_loss: 0.9728, test_acc: 92.87%
Epoch 120 - loss: 0.9680, acc: 93.36% / test_loss: 0.9714, test_acc: 92.96%
Epoch 121 - loss: 0.9686, acc: 93.27% / test_loss: 0.9731, test_acc: 92.84%
Epoch 122 - loss: 0.9668, acc: 93.45% / test_loss: 0.9740, test_acc: 92.69%
Epoch 123 - loss: 0.9673, acc: 93.39% / test_loss: 0.9721, test_acc: 92.90%
Epoch 124 - loss: 0.9678, acc: 93.35% / test_loss: 0.9721, test_acc: 92.90%
Epoch 125 - loss: 0.9669, acc: 93.44% / test_loss: 0.9710, test_acc: 93.03%
Epoch 126 - loss: 0.9668, acc: 93.46% / test_loss: 0.9723, test_acc: 92.91%
Epoch 127 - loss: 0.9689, acc: 93.26% / test_loss: 0.9709, test_acc: 93.08%
Epoch 128 - loss: 0.9496, acc: 95.30% / test_loss: 0.9288, test_acc: 97.61%
Epoch 129 - loss: 0.9218, acc: 98.34% / test_loss: 0.9326, test_acc: 97.28%
Epoch 130 - loss: 0.9208, acc: 98.44% / test_loss: 0.9287, test_acc: 97.62%
Epoch 131 - loss: 0.9200, acc: 98.54% / test_loss: 0.9285, test_acc: 97.60%
Epoch 132 - loss: 0.9195, acc: 98.56% / test_loss: 0.9304, test_acc: 97.42%
Epoch 133 - loss: 0.9189, acc: 98.60% / test_loss: 0.9277, test_acc: 97.73%
Epoch 134 - loss: 0.9186, acc: 98.63% / test_loss: 0.9258, test_acc: 97.92%
Epoch 135 - loss: 0.9185, acc: 98.65% / test_loss: 0.9273, test_acc: 97.73%
Epoch 136 - loss: 0.9190, acc: 98.62% / test_loss: 0.9256, test_acc: 97.91%
Epoch 137 - loss: 0.9182, acc: 98.68% / test_loss: 0.9263, test_acc: 97.87%
Epoch 138 - loss: 0.9178, acc: 98.73% / test_loss: 0.9272, test_acc: 97.73%
Epoch 139 - loss: 0.9184, acc: 98.66% / test_loss: 0.9262, test_acc: 97.86%
Epoch 140 - loss: 0.9186, acc: 98.66% / test_loss: 0.9262, test_acc: 97.87%
Epoch 141 - loss: 0.9171, acc: 98.80% / test_loss: 0.9259, test_acc: 97.89%
Epoch 142 - loss: 0.9188, acc: 98.61% / test_loss: 0.9270, test_acc: 97.80%
Epoch 143 - loss: 0.9175, acc: 98.75% / test_loss: 0.9247, test_acc: 97.99%
Epoch 144 - loss: 0.9166, acc: 98.85% / test_loss: 0.9249, test_acc: 97.97%
Epoch 145 - loss: 0.9175, acc: 98.73% / test_loss: 0.9252, test_acc: 97.98%
Epoch 146 - loss: 0.9182, acc: 98.72% / test_loss: 0.9255, test_acc: 97.93%
Epoch 147 - loss: 0.9160, acc: 98.89% / test_loss: 0.9246, test_acc: 98.06%
Epoch 148 - loss: 0.9163, acc: 98.85% / test_loss: 0.9262, test_acc: 97.88%
Epoch 149 - loss: 0.9172, acc: 98.79% / test_loss: 0.9256, test_acc: 97.91%
Epoch 150 - loss: 0.9173, acc: 98.72% / test_loss: 0.9245, test_acc: 98.06%
Epoch 151 - loss: 0.9201, acc: 98.46% / test_loss: 0.9243, test_acc: 98.07%
Epoch 152 - loss: 0.9180, acc: 98.69% / test_loss: 0.9244, test_acc: 98.09%
Epoch 153 - loss: 0.9164, acc: 98.87% / test_loss: 0.9262, test_acc: 97.87%
Epoch 154 - loss: 0.9167, acc: 98.84% / test_loss: 0.9260, test_acc: 97.92%
Epoch 155 - loss: 0.9171, acc: 98.80% / test_loss: 0.9253, test_acc: 97.96%
Epoch 156 - loss: 0.9166, acc: 98.84% / test_loss: 0.9244, test_acc: 98.06%
Epoch 157 - loss: 0.9155, acc: 98.95% / test_loss: 0.9238, test_acc: 98.14%
Epoch 158 - loss: 0.9161, acc: 98.88% / test_loss: 0.9233, test_acc: 98.14%
Epoch 159 - loss: 0.9151, acc: 98.98% / test_loss: 0.9227, test_acc: 98.21%
Epoch 160 - loss: 0.9154, acc: 98.97% / test_loss: 0.9272, test_acc: 97.74%
Epoch 161 - loss: 0.9159, acc: 98.91% / test_loss: 0.9242, test_acc: 98.09%
Epoch 162 - loss: 0.9163, acc: 98.86% / test_loss: 0.9243, test_acc: 98.05%
Epoch 163 - loss: 0.9170, acc: 98.78% / test_loss: 0.9227, test_acc: 98.23%
Epoch 164 - loss: 0.9159, acc: 98.93% / test_loss: 0.9238, test_acc: 98.11%
Epoch 165 - loss: 0.9157, acc: 98.90% / test_loss: 0.9224, test_acc: 98.23%
Epoch 166 - loss: 0.9160, acc: 98.89% / test_loss: 0.9233, test_acc: 98.14%
Epoch 167 - loss: 0.9153, acc: 98.95% / test_loss: 0.9255, test_acc: 97.89%
Epoch 168 - loss: 0.9155, acc: 98.95% / test_loss: 0.9235, test_acc: 98.15%
Epoch 169 - loss: 0.9144, acc: 99.06% / test_loss: 0.9228, test_acc: 98.18%
Epoch 170 - loss: 0.9155, acc: 98.98% / test_loss: 0.9275, test_acc: 97.70%
Epoch 171 - loss: 0.9155, acc: 98.94% / test_loss: 0.9230, test_acc: 98.19%
Epoch 172 - loss: 0.9155, acc: 98.94% / test_loss: 0.9247, test_acc: 98.02%
Epoch 173 - loss: 0.9154, acc: 98.97% / test_loss: 0.9234, test_acc: 98.17%
Epoch 174 - loss: 0.9144, acc: 99.06% / test_loss: 0.9227, test_acc: 98.21%
Epoch 175 - loss: 0.9155, acc: 98.95% / test_loss: 0.9249, test_acc: 97.98%
Epoch 176 - loss: 0.9156, acc: 98.91% / test_loss: 0.9242, test_acc: 98.04%
Epoch 177 - loss: 0.9162, acc: 98.88% / test_loss: 0.9272, test_acc: 97.73%
Epoch 178 - loss: 0.9151, acc: 98.99% / test_loss: 0.9218, test_acc: 98.32%
Epoch 179 - loss: 0.9140, acc: 99.09% / test_loss: 0.9217, test_acc: 98.30%
Epoch 180 - loss: 0.9143, acc: 99.07% / test_loss: 0.9230, test_acc: 98.17%
Epoch 181 - loss: 0.9149, acc: 99.02% / test_loss: 0.9252, test_acc: 97.95%
Epoch 182 - loss: 0.9161, acc: 98.90% / test_loss: 0.9240, test_acc: 98.08%
Epoch 183 - loss: 0.9144, acc: 99.06% / test_loss: 0.9244, test_acc: 98.01%
Epoch 184 - loss: 0.9149, acc: 99.00% / test_loss: 0.9295, test_acc: 97.54%
Epoch 185 - loss: 0.9141, acc: 99.08% / test_loss: 0.9225, test_acc: 98.23%
Epoch 186 - loss: 0.9144, acc: 99.06% / test_loss: 0.9217, test_acc: 98.31%
Epoch 187 - loss: 0.9141, acc: 99.08% / test_loss: 0.9220, test_acc: 98.26%
Epoch 188 - loss: 0.9149, acc: 99.00% / test_loss: 0.9224, test_acc: 98.25%
Epoch 189 - loss: 0.9147, acc: 99.02% / test_loss: 0.9224, test_acc: 98.22%
Epoch 190 - loss: 0.9159, acc: 98.91% / test_loss: 0.9234, test_acc: 98.18%
Epoch 191 - loss: 0.9163, acc: 98.85% / test_loss: 0.9234, test_acc: 98.12%
Epoch 192 - loss: 0.9138, acc: 99.12% / test_loss: 0.9216, test_acc: 98.35%
Epoch 193 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 194 - loss: 0.9156, acc: 98.91% / test_loss: 0.9233, test_acc: 98.19%
Epoch 195 - loss: 0.9159, acc: 98.92% / test_loss: 0.9229, test_acc: 98.17%
Epoch 196 - loss: 0.9139, acc: 99.10% / test_loss: 0.9221, test_acc: 98.24%
Epoch 197 - loss: 0.9139, acc: 99.09% / test_loss: 0.9218, test_acc: 98.30%
Epoch 198 - loss: 0.9156, acc: 98.91% / test_loss: 0.9250, test_acc: 97.99%
Epoch 199 - loss: 0.9158, acc: 98.91% / test_loss: 0.9244, test_acc: 98.05%
Epoch 200 - loss: 0.9144, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Epoch 201 - loss: 0.9140, acc: 99.08% / test_loss: 0.9220, test_acc: 98.28%
Epoch 202 - loss: 0.9146, acc: 99.03% / test_loss: 0.9274, test_acc: 97.70%
Epoch 203 - loss: 0.9146, acc: 99.03% / test_loss: 0.9242, test_acc: 98.07%
Epoch 204 - loss: 0.9141, acc: 99.09% / test_loss: 0.9221, test_acc: 98.25%
Epoch 205 - loss: 0.9135, acc: 99.13% / test_loss: 0.9306, test_acc: 97.41%
Epoch 206 - loss: 0.9140, acc: 99.10% / test_loss: 0.9213, test_acc: 98.32%
Epoch 207 - loss: 0.9149, acc: 98.97% / test_loss: 0.9244, test_acc: 98.07%
Epoch 208 - loss: 0.9154, acc: 98.94% / test_loss: 0.9243, test_acc: 98.05%
Epoch 209 - loss: 0.9150, acc: 98.98% / test_loss: 0.9224, test_acc: 98.25%
Epoch 210 - loss: 0.9149, acc: 99.00% / test_loss: 0.9236, test_acc: 98.10%
Epoch 211 - loss: 0.9144, acc: 99.06% / test_loss: 0.9232, test_acc: 98.17%
Epoch 212 - loss: 0.9143, acc: 99.05% / test_loss: 0.9216, test_acc: 98.36%
Epoch 213 - loss: 0.9147, acc: 99.04% / test_loss: 0.9228, test_acc: 98.17%
Epoch 214 - loss: 0.9140, acc: 99.07% / test_loss: 0.9221, test_acc: 98.28%
Epoch 215 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.24%
Epoch 216 - loss: 0.9144, acc: 99.06% / test_loss: 0.9223, test_acc: 98.26%
Epoch 217 - loss: 0.9139, acc: 99.11% / test_loss: 0.9224, test_acc: 98.24%
Epoch 218 - loss: 0.9138, acc: 99.12% / test_loss: 0.9212, test_acc: 98.36%
Epoch 219 - loss: 0.9132, acc: 99.19% / test_loss: 0.9221, test_acc: 98.26%
Epoch 220 - loss: 0.9142, acc: 99.08% / test_loss: 0.9226, test_acc: 98.19%
Epoch 221 - loss: 0.9150, acc: 98.99% / test_loss: 0.9220, test_acc: 98.29%
Epoch 222 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.28%
Epoch 223 - loss: 0.9133, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 224 - loss: 0.9141, acc: 99.06% / test_loss: 0.9222, test_acc: 98.24%
Epoch 225 - loss: 0.9133, acc: 99.20% / test_loss: 0.9231, test_acc: 98.15%
Epoch 226 - loss: 0.9132, acc: 99.18% / test_loss: 0.9211, test_acc: 98.33%
Epoch 227 - loss: 0.9129, acc: 99.20% / test_loss: 0.9219, test_acc: 98.31%
Epoch 228 - loss: 0.9136, acc: 99.16% / test_loss: 0.9210, test_acc: 98.38%
Epoch 229 - loss: 0.9137, acc: 99.12% / test_loss: 0.9223, test_acc: 98.27%
Epoch 230 - loss: 0.9150, acc: 98.99% / test_loss: 0.9258, test_acc: 97.92%
Epoch 231 - loss: 0.9148, acc: 98.99% / test_loss: 0.9215, test_acc: 98.33%
Epoch 232 - loss: 0.9130, acc: 99.18% / test_loss: 0.9214, test_acc: 98.35%
Epoch 233 - loss: 0.9137, acc: 99.11% / test_loss: 0.9227, test_acc: 98.20%
Epoch 234 - loss: 0.9135, acc: 99.14% / test_loss: 0.9219, test_acc: 98.29%
Epoch 235 - loss: 0.9133, acc: 99.16% / test_loss: 0.9220, test_acc: 98.23%
Epoch 236 - loss: 0.9133, acc: 99.15% / test_loss: 0.9221, test_acc: 98.25%
Epoch 237 - loss: 0.9127, acc: 99.22% / test_loss: 0.9231, test_acc: 98.17%
Epoch 238 - loss: 0.9135, acc: 99.14% / test_loss: 0.9215, test_acc: 98.34%
Epoch 239 - loss: 0.9146, acc: 99.03% / test_loss: 0.9220, test_acc: 98.30%
Epoch 240 - loss: 0.9129, acc: 99.18% / test_loss: 0.9215, test_acc: 98.35%
Epoch 241 - loss: 0.9125, acc: 99.23% / test_loss: 0.9214, test_acc: 98.32%
Epoch 242 - loss: 0.9130, acc: 99.20% / test_loss: 0.9228, test_acc: 98.20%
Epoch 243 - loss: 0.9132, acc: 99.16% / test_loss: 0.9233, test_acc: 98.17%
Epoch 244 - loss: 0.9167, acc: 98.81% / test_loss: 0.9235, test_acc: 98.10%
Epoch 245 - loss: 0.9142, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 246 - loss: 0.9133, acc: 99.16% / test_loss: 0.9214, test_acc: 98.37%
Epoch 247 - loss: 0.9126, acc: 99.21% / test_loss: 0.9213, test_acc: 98.35%
Epoch 248 - loss: 0.9127, acc: 99.21% / test_loss: 0.9218, test_acc: 98.29%
Epoch 249 - loss: 0.9141, acc: 99.09% / test_loss: 0.9227, test_acc: 98.22%
Epoch 250 - loss: 0.9151, acc: 98.97% / test_loss: 0.9227, test_acc: 98.18%
Epoch 251 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 252 - loss: 0.9130, acc: 99.18% / test_loss: 0.9223, test_acc: 98.25%
Epoch 253 - loss: 0.9133, acc: 99.16% / test_loss: 0.9241, test_acc: 98.07%
Epoch 254 - loss: 0.9139, acc: 99.09% / test_loss: 0.9244, test_acc: 98.07%
Epoch 255 - loss: 0.9130, acc: 99.18% / test_loss: 0.9233, test_acc: 98.16%
Epoch 256 - loss: 0.9139, acc: 99.08% / test_loss: 0.9214, test_acc: 98.31%
Epoch 257 - loss: 0.9124, acc: 99.24% / test_loss: 0.9208, test_acc: 98.41%
Epoch 258 - loss: 0.9128, acc: 99.20% / test_loss: 0.9212, test_acc: 98.35%
Epoch 259 - loss: 0.9136, acc: 99.11% / test_loss: 0.9219, test_acc: 98.27%
Epoch 260 - loss: 0.9143, acc: 99.05% / test_loss: 0.9218, test_acc: 98.28%
Epoch 261 - loss: 0.9147, acc: 99.03% / test_loss: 0.9234, test_acc: 98.15%
Epoch 262 - loss: 0.9135, acc: 99.15% / test_loss: 0.9252, test_acc: 97.92%
Epoch 263 - loss: 0.9132, acc: 99.18% / test_loss: 0.9220, test_acc: 98.24%
Epoch 264 - loss: 0.9132, acc: 99.16% / test_loss: 0.9212, test_acc: 98.36%
Epoch 265 - loss: 0.9125, acc: 99.23% / test_loss: 0.9208, test_acc: 98.37%
Epoch 266 - loss: 0.9131, acc: 99.18% / test_loss: 0.9216, test_acc: 98.31%
Epoch 267 - loss: 0.9133, acc: 99.18% / test_loss: 0.9254, test_acc: 97.94%
Epoch 268 - loss: 0.9150, acc: 98.98% / test_loss: 0.9240, test_acc: 98.04%
Epoch 269 - loss: 0.9143, acc: 99.07% / test_loss: 0.9214, test_acc: 98.34%
Epoch 270 - loss: 0.9135, acc: 99.14% / test_loss: 0.9214, test_acc: 98.32%
Epoch 271 - loss: 0.9129, acc: 99.19% / test_loss: 0.9217, test_acc: 98.29%
Epoch 272 - loss: 0.9129, acc: 99.20% / test_loss: 0.9238, test_acc: 98.07%
Epoch 273 - loss: 0.9126, acc: 99.22% / test_loss: 0.9207, test_acc: 98.44%
Epoch 274 - loss: 0.9130, acc: 99.16% / test_loss: 0.9218, test_acc: 98.29%
Epoch 275 - loss: 0.9131, acc: 99.18% / test_loss: 0.9211, test_acc: 98.38%
Epoch 276 - loss: 0.9123, acc: 99.25% / test_loss: 0.9218, test_acc: 98.29%
Epoch 277 - loss: 0.9125, acc: 99.23% / test_loss: 0.9210, test_acc: 98.37%
Epoch 278 - loss: 0.9127, acc: 99.21% / test_loss: 0.9218, test_acc: 98.30%
Epoch 279 - loss: 0.9123, acc: 99.26% / test_loss: 0.9224, test_acc: 98.26%
Epoch 280 - loss: 0.9138, acc: 99.11% / test_loss: 0.9212, test_acc: 98.36%
Epoch 281 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.23%
Epoch 282 - loss: 0.9132, acc: 99.15% / test_loss: 0.9207, test_acc: 98.41%
Epoch 283 - loss: 0.9134, acc: 99.15% / test_loss: 0.9247, test_acc: 97.98%
Epoch 284 - loss: 0.9136, acc: 99.14% / test_loss: 0.9232, test_acc: 98.21%
Epoch 285 - loss: 0.9130, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 286 - loss: 0.9127, acc: 99.21% / test_loss: 0.9230, test_acc: 98.17%
Epoch 287 - loss: 0.9125, acc: 99.22% / test_loss: 0.9223, test_acc: 98.24%
Epoch 288 - loss: 0.9123, acc: 99.25% / test_loss: 0.9223, test_acc: 98.24%
Epoch 289 - loss: 0.9129, acc: 99.19% / test_loss: 0.9220, test_acc: 98.27%
Epoch 290 - loss: 0.9134, acc: 99.14% / test_loss: 0.9287, test_acc: 97.61%
Epoch 291 - loss: 0.9158, acc: 98.88% / test_loss: 0.9212, test_acc: 98.36%
Epoch 292 - loss: 0.9130, acc: 99.18% / test_loss: 0.9222, test_acc: 98.27%
Epoch 293 - loss: 0.9128, acc: 99.19% / test_loss: 0.9211, test_acc: 98.36%
Epoch 294 - loss: 0.9143, acc: 99.06% / test_loss: 0.9214, test_acc: 98.32%
Epoch 295 - loss: 0.9125, acc: 99.23% / test_loss: 0.9204, test_acc: 98.43%
Epoch 296 - loss: 0.9142, acc: 99.05% / test_loss: 0.9292, test_acc: 97.55%
Epoch 297 - loss: 0.9153, acc: 98.94% / test_loss: 0.9221, test_acc: 98.28%
Epoch 298 - loss: 0.9126, acc: 99.22% / test_loss: 0.9209, test_acc: 98.41%
Epoch 299 - loss: 0.9122, acc: 99.27% / test_loss: 0.9209, test_acc: 98.38%
Epoch 300 - loss: 0.9134, acc: 99.15% / test_loss: 0.9239, test_acc: 98.12%
Epoch 301 - loss: 0.9142, acc: 99.06% / test_loss: 0.9225, test_acc: 98.25%
Epoch 302 - loss: 0.9128, acc: 99.21% / test_loss: 0.9222, test_acc: 98.23%
Epoch 303 - loss: 0.9121, acc: 99.28% / test_loss: 0.9207, test_acc: 98.39%
Epoch 304 - loss: 0.9123, acc: 99.25% / test_loss: 0.9242, test_acc: 98.08%
Epoch 305 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.29%
Epoch 306 - loss: 0.9142, acc: 99.07% / test_loss: 0.9225, test_acc: 98.24%
Epoch 307 - loss: 0.9130, acc: 99.18% / test_loss: 0.9247, test_acc: 98.01%
Epoch 308 - loss: 0.9133, acc: 99.15% / test_loss: 0.9220, test_acc: 98.28%
Epoch 309 - loss: 0.9132, acc: 99.17% / test_loss: 0.9225, test_acc: 98.21%
Epoch 310 - loss: 0.9135, acc: 99.13% / test_loss: 0.9237, test_acc: 98.10%
Epoch 311 - loss: 0.9126, acc: 99.24% / test_loss: 0.9220, test_acc: 98.26%
Epoch 312 - loss: 0.9123, acc: 99.24% / test_loss: 0.9239, test_acc: 98.10%
Epoch 313 - loss: 0.9142, acc: 99.08% / test_loss: 0.9216, test_acc: 98.35%
Epoch 314 - loss: 0.9129, acc: 99.21% / test_loss: 0.9237, test_acc: 98.10%
Epoch 315 - loss: 0.9128, acc: 99.21% / test_loss: 0.9205, test_acc: 98.44%
Epoch 316 - loss: 0.9126, acc: 99.24% / test_loss: 0.9206, test_acc: 98.44%
Epoch 317 - loss: 0.9127, acc: 99.21% / test_loss: 0.9221, test_acc: 98.24%
Epoch 318 - loss: 0.9123, acc: 99.25% / test_loss: 0.9221, test_acc: 98.25%
Epoch 319 - loss: 0.9151, acc: 98.95% / test_loss: 0.9243, test_acc: 98.01%
Epoch 320 - loss: 0.9151, acc: 98.97% / test_loss: 0.9223, test_acc: 98.25%
Epoch 321 - loss: 0.9132, acc: 99.16% / test_loss: 0.9330, test_acc: 97.15%
Epoch 322 - loss: 0.9130, acc: 99.18% / test_loss: 0.9205, test_acc: 98.44%
Epoch 323 - loss: 0.9128, acc: 99.21% / test_loss: 0.9214, test_acc: 98.33%
Epoch 324 - loss: 0.9133, acc: 99.15% / test_loss: 0.9213, test_acc: 98.36%
Epoch 325 - loss: 0.9126, acc: 99.23% / test_loss: 0.9206, test_acc: 98.44%
Epoch 326 - loss: 0.9120, acc: 99.29% / test_loss: 0.9231, test_acc: 98.20%
Epoch 327 - loss: 0.9130, acc: 99.19% / test_loss: 0.9220, test_acc: 98.28%
Epoch 328 - loss: 0.9141, acc: 99.07% / test_loss: 0.9224, test_acc: 98.23%
Epoch 329 - loss: 0.9130, acc: 99.19% / test_loss: 0.9216, test_acc: 98.33%
Epoch 330 - loss: 0.9134, acc: 99.14% / test_loss: 0.9203, test_acc: 98.43%
Epoch 331 - loss: 0.9136, acc: 99.12% / test_loss: 0.9221, test_acc: 98.24%
Epoch 332 - loss: 0.9137, acc: 99.11% / test_loss: 0.9216, test_acc: 98.34%
Epoch 333 - loss: 0.9126, acc: 99.23% / test_loss: 0.9225, test_acc: 98.20%
Epoch 334 - loss: 0.9132, acc: 99.17% / test_loss: 0.9211, test_acc: 98.37%
Epoch 335 - loss: 0.9124, acc: 99.24% / test_loss: 0.9220, test_acc: 98.29%
Epoch 336 - loss: 0.9122, acc: 99.27% / test_loss: 0.9223, test_acc: 98.26%
Epoch 337 - loss: 0.9122, acc: 99.28% / test_loss: 0.9219, test_acc: 98.32%
Epoch 338 - loss: 0.9119, acc: 99.30% / test_loss: 0.9209, test_acc: 98.37%
Epoch 339 - loss: 0.9139, acc: 99.09% / test_loss: 0.9252, test_acc: 97.95%
Epoch 340 - loss: 0.9131, acc: 99.18% / test_loss: 0.9224, test_acc: 98.24%
Epoch 341 - loss: 0.9146, acc: 99.02% / test_loss: 0.9214, test_acc: 98.31%
Epoch 342 - loss: 0.9133, acc: 99.15% / test_loss: 0.9206, test_acc: 98.43%
Epoch 343 - loss: 0.9118, acc: 99.31% / test_loss: 0.9204, test_acc: 98.42%
Epoch 344 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.23%
Epoch 345 - loss: 0.9127, acc: 99.21% / test_loss: 0.9201, test_acc: 98.44%
Epoch 346 - loss: 0.9120, acc: 99.29% / test_loss: 0.9208, test_acc: 98.41%
Epoch 347 - loss: 0.9118, acc: 99.29% / test_loss: 0.9235, test_acc: 98.12%
Epoch 348 - loss: 0.9119, acc: 99.31% / test_loss: 0.9206, test_acc: 98.40%
Epoch 349 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.26%
Epoch 350 - loss: 0.9141, acc: 99.06% / test_loss: 0.9224, test_acc: 98.24%
Epoch 351 - loss: 0.9131, acc: 99.17% / test_loss: 0.9217, test_acc: 98.29%
Epoch 352 - loss: 0.9123, acc: 99.26% / test_loss: 0.9220, test_acc: 98.29%
Epoch 353 - loss: 0.9128, acc: 99.19% / test_loss: 0.9221, test_acc: 98.28%
Epoch 354 - loss: 0.9137, acc: 99.11% / test_loss: 0.9221, test_acc: 98.27%
Epoch 355 - loss: 0.9127, acc: 99.21% / test_loss: 0.9213, test_acc: 98.36%
Epoch 356 - loss: 0.9141, acc: 99.08% / test_loss: 0.9220, test_acc: 98.26%
Epoch 357 - loss: 0.9126, acc: 99.22% / test_loss: 0.9217, test_acc: 98.27%
Epoch 358 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.30%
Epoch 359 - loss: 0.9127, acc: 99.21% / test_loss: 0.9241, test_acc: 98.07%
Epoch 360 - loss: 0.9132, acc: 99.18% / test_loss: 0.9208, test_acc: 98.40%
Epoch 361 - loss: 0.9121, acc: 99.28% / test_loss: 0.9204, test_acc: 98.43%
Epoch 362 - loss: 0.9138, acc: 99.10% / test_loss: 0.9296, test_acc: 97.52%
Epoch 363 - loss: 0.9144, acc: 99.05% / test_loss: 0.9206, test_acc: 98.38%
Epoch 364 - loss: 0.9131, acc: 99.17% / test_loss: 0.9223, test_acc: 98.23%
Epoch 365 - loss: 0.9122, acc: 99.26% / test_loss: 0.9226, test_acc: 98.22%
Epoch 366 - loss: 0.9124, acc: 99.24% / test_loss: 0.9224, test_acc: 98.25%
Epoch 367 - loss: 0.9122, acc: 99.26% / test_loss: 0.9212, test_acc: 98.35%
Epoch 368 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.26%
Epoch 369 - loss: 0.9122, acc: 99.28% / test_loss: 0.9209, test_acc: 98.40%
Epoch 370 - loss: 0.9135, acc: 99.12% / test_loss: 0.9217, test_acc: 98.29%
Epoch 371 - loss: 0.9136, acc: 99.12% / test_loss: 0.9324, test_acc: 97.21%
Epoch 372 - loss: 0.9135, acc: 99.12% / test_loss: 0.9221, test_acc: 98.26%
Epoch 373 - loss: 0.9122, acc: 99.27% / test_loss: 0.9208, test_acc: 98.37%
Epoch 374 - loss: 0.9121, acc: 99.28% / test_loss: 0.9246, test_acc: 97.99%
Epoch 375 - loss: 0.9128, acc: 99.21% / test_loss: 0.9214, test_acc: 98.31%
Epoch 376 - loss: 0.9126, acc: 99.23% / test_loss: 0.9214, test_acc: 98.30%
Epoch 377 - loss: 0.9132, acc: 99.15% / test_loss: 0.9317, test_acc: 97.29%
Epoch 378 - loss: 0.9134, acc: 99.15% / test_loss: 0.9206, test_acc: 98.44%
Epoch 379 - loss: 0.9137, acc: 99.12% / test_loss: 0.9242, test_acc: 98.04%
Epoch 380 - loss: 0.9134, acc: 99.13% / test_loss: 0.9215, test_acc: 98.32%
Epoch 381 - loss: 0.9120, acc: 99.29% / test_loss: 0.9210, test_acc: 98.38%
Epoch 382 - loss: 0.9120, acc: 99.28% / test_loss: 0.9230, test_acc: 98.20%
Epoch 383 - loss: 0.9151, acc: 98.97% / test_loss: 0.9205, test_acc: 98.42%
Epoch 384 - loss: 0.9120, acc: 99.28% / test_loss: 0.9203, test_acc: 98.45%
Epoch 385 - loss: 0.9121, acc: 99.28% / test_loss: 0.9205, test_acc: 98.42%
Epoch 386 - loss: 0.9119, acc: 99.31% / test_loss: 0.9206, test_acc: 98.43%
Epoch 387 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.48%
Epoch 388 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.47%
Epoch 389 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.50%
Epoch 390 - loss: 0.9126, acc: 99.22% / test_loss: 0.9225, test_acc: 98.24%
Epoch 391 - loss: 0.9139, acc: 99.09% / test_loss: 0.9206, test_acc: 98.42%
Epoch 392 - loss: 0.9127, acc: 99.21% / test_loss: 0.9245, test_acc: 97.98%
Epoch 393 - loss: 0.9134, acc: 99.14% / test_loss: 0.9218, test_acc: 98.30%
Epoch 394 - loss: 0.9144, acc: 99.04% / test_loss: 0.9235, test_acc: 98.14%
Epoch 395 - loss: 0.9127, acc: 99.21% / test_loss: 0.9205, test_acc: 98.42%
Epoch 396 - loss: 0.9132, acc: 99.15% / test_loss: 0.9212, test_acc: 98.35%
Epoch 397 - loss: 0.9120, acc: 99.28% / test_loss: 0.9213, test_acc: 98.36%
Epoch 398 - loss: 0.9124, acc: 99.24% / test_loss: 0.9256, test_acc: 97.92%
Epoch 399 - loss: 0.9122, acc: 99.26% / test_loss: 0.9213, test_acc: 98.34%
Epoch 400 - loss: 0.9121, acc: 99.25% / test_loss: 0.9209, test_acc: 98.37%
Best test accuracy 98.50% in epoch 389.
----------------------------------------------------------------------------------------------------
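Each run above ends with a "Best test accuracy ... in epoch ..." summary, which implies the training driver prints one line per epoch and keeps the running maximum of the test accuracy. The training code itself is not part of this log, so the following is only a minimal, self-contained sketch of that bookkeeping; the function name `log_run` and the metric tuples are hypothetical, with the two example tuples reusing the values of Run 9, epochs 1-2.
###Code
# Hypothetical sketch (not the actual training code) of the per-epoch logging
# and best-test-accuracy tracking that would produce lines like the ones above.
def log_run(metrics):
    """metrics: list of (loss, acc, test_loss, test_acc) tuples, one per epoch."""
    best_acc, best_epoch = 0.0, 0
    for epoch, (loss, acc, test_loss, test_acc) in enumerate(metrics, start=1):
        print(f"Epoch {epoch} - loss: {loss:.4f}, acc: {acc:.2f}% / "
              f"test_loss: {test_loss:.4f}, test_acc: {test_acc:.2f}%")
        if test_acc > best_acc:  # strict improvement keeps the first epoch reaching the maximum
            best_acc, best_epoch = test_acc, epoch
    print(f"Best test accuracy {best_acc:.2f}% in epoch {best_epoch}.")

# Example with two dummy epochs (values taken from the start of Run 9):
log_run([(1.3374, 56.95, 1.1728, 74.81), (1.1214, 79.15, 1.0713, 83.68)])
###Output
Epoch 1 - loss: 1.3374, acc: 56.95% / test_loss: 1.1728, test_acc: 74.81%
Epoch 2 - loss: 1.1214, acc: 79.15% / test_loss: 1.0713, test_acc: 83.68%
Best test accuracy 83.68% in epoch 2.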
Run 10
Epoch 1 - loss: 1.3538, acc: 55.46% / test_loss: 1.2185, test_acc: 69.91%
Epoch 2 - loss: 1.1334, acc: 78.07% / test_loss: 1.0875, test_acc: 82.10%
Epoch 3 - loss: 1.0934, acc: 81.38% / test_loss: 1.0822, test_acc: 82.54%
Epoch 4 - loss: 1.0672, acc: 84.07% / test_loss: 1.0360, test_acc: 87.45%
Epoch 5 - loss: 1.0413, acc: 86.66% / test_loss: 1.0255, test_acc: 88.12%
Epoch 6 - loss: 1.0356, acc: 87.13% / test_loss: 1.0216, test_acc: 88.35%
Epoch 7 - loss: 1.0274, acc: 87.90% / test_loss: 1.0190, test_acc: 88.74%
Epoch 8 - loss: 1.0255, acc: 88.00% / test_loss: 1.0168, test_acc: 88.92%
Epoch 9 - loss: 1.0236, acc: 88.15% / test_loss: 1.0211, test_acc: 88.60%
Epoch 10 - loss: 1.0217, acc: 88.34% / test_loss: 1.0129, test_acc: 89.17%
Epoch 11 - loss: 1.0205, acc: 88.46% / test_loss: 1.0176, test_acc: 88.82%
Epoch 12 - loss: 1.0178, acc: 88.66% / test_loss: 1.0116, test_acc: 89.26%
Epoch 13 - loss: 1.0160, acc: 88.82% / test_loss: 1.0076, test_acc: 89.63%
Epoch 14 - loss: 1.0156, acc: 88.87% / test_loss: 1.0111, test_acc: 89.27%
Epoch 15 - loss: 1.0143, acc: 88.95% / test_loss: 1.0117, test_acc: 89.22%
Epoch 16 - loss: 1.0137, acc: 89.03% / test_loss: 1.0095, test_acc: 89.51%
Epoch 17 - loss: 1.0116, acc: 89.22% / test_loss: 1.0060, test_acc: 89.79%
Epoch 18 - loss: 1.0098, acc: 89.39% / test_loss: 1.0057, test_acc: 89.75%
Epoch 19 - loss: 1.0108, acc: 89.25% / test_loss: 1.0024, test_acc: 90.05%
Epoch 20 - loss: 1.0085, acc: 89.49% / test_loss: 1.0048, test_acc: 89.86%
Epoch 21 - loss: 1.0070, acc: 89.63% / test_loss: 1.0020, test_acc: 90.13%
Epoch 22 - loss: 1.0076, acc: 89.54% / test_loss: 1.0043, test_acc: 89.86%
Epoch 23 - loss: 1.0076, acc: 89.59% / test_loss: 1.0021, test_acc: 90.15%
Epoch 24 - loss: 1.0066, acc: 89.66% / test_loss: 1.0027, test_acc: 90.05%
Epoch 25 - loss: 1.0057, acc: 89.73% / test_loss: 1.0039, test_acc: 89.92%
Epoch 26 - loss: 0.9962, acc: 90.82% / test_loss: 0.9638, test_acc: 94.20%
Epoch 27 - loss: 0.9660, acc: 94.02% / test_loss: 0.9626, test_acc: 94.34%
Epoch 28 - loss: 0.9631, acc: 94.28% / test_loss: 0.9627, test_acc: 94.25%
Epoch 29 - loss: 0.9636, acc: 94.18% / test_loss: 0.9642, test_acc: 94.14%
Epoch 30 - loss: 0.9608, acc: 94.49% / test_loss: 0.9584, test_acc: 94.73%
Epoch 31 - loss: 0.9621, acc: 94.31% / test_loss: 0.9577, test_acc: 94.78%
Epoch 32 - loss: 0.9611, acc: 94.42% / test_loss: 0.9609, test_acc: 94.47%
Epoch 33 - loss: 0.9596, acc: 94.56% / test_loss: 0.9582, test_acc: 94.71%
Epoch 34 - loss: 0.9598, acc: 94.58% / test_loss: 0.9591, test_acc: 94.65%
Epoch 35 - loss: 0.9602, acc: 94.51% / test_loss: 0.9583, test_acc: 94.72%
Epoch 36 - loss: 0.9595, acc: 94.59% / test_loss: 0.9569, test_acc: 94.84%
Epoch 37 - loss: 0.9592, acc: 94.59% / test_loss: 0.9571, test_acc: 94.81%
Epoch 38 - loss: 0.9588, acc: 94.63% / test_loss: 0.9567, test_acc: 94.82%
Epoch 39 - loss: 0.9598, acc: 94.53% / test_loss: 0.9581, test_acc: 94.71%
Epoch 40 - loss: 0.9589, acc: 94.59% / test_loss: 0.9598, test_acc: 94.55%
Epoch 41 - loss: 0.9589, acc: 94.64% / test_loss: 0.9556, test_acc: 94.96%
Epoch 42 - loss: 0.9580, acc: 94.74% / test_loss: 0.9542, test_acc: 95.05%
Epoch 43 - loss: 0.9572, acc: 94.79% / test_loss: 0.9557, test_acc: 94.93%
Epoch 44 - loss: 0.9564, acc: 94.87% / test_loss: 0.9589, test_acc: 94.66%
Epoch 45 - loss: 0.9575, acc: 94.73% / test_loss: 0.9594, test_acc: 94.53%
Epoch 46 - loss: 0.9556, acc: 94.93% / test_loss: 0.9543, test_acc: 95.06%
Epoch 47 - loss: 0.9568, acc: 94.81% / test_loss: 0.9538, test_acc: 95.08%
Epoch 48 - loss: 0.9558, acc: 94.92% / test_loss: 0.9536, test_acc: 95.13%
Epoch 49 - loss: 0.9555, acc: 94.97% / test_loss: 0.9544, test_acc: 95.08%
Epoch 50 - loss: 0.9544, acc: 95.09% / test_loss: 0.9597, test_acc: 94.60%
Epoch 51 - loss: 0.9567, acc: 94.83% / test_loss: 0.9525, test_acc: 95.26%
Epoch 52 - loss: 0.9533, acc: 95.18% / test_loss: 0.9529, test_acc: 95.20%
Epoch 53 - loss: 0.9555, acc: 94.93% / test_loss: 0.9571, test_acc: 94.77%
Epoch 54 - loss: 0.9544, acc: 95.10% / test_loss: 0.9551, test_acc: 95.00%
Epoch 55 - loss: 0.9534, acc: 95.15% / test_loss: 0.9525, test_acc: 95.24%
Epoch 56 - loss: 0.9528, acc: 95.20% / test_loss: 0.9522, test_acc: 95.24%
Epoch 57 - loss: 0.9541, acc: 95.10% / test_loss: 0.9524, test_acc: 95.24%
Epoch 58 - loss: 0.9536, acc: 95.14% / test_loss: 0.9529, test_acc: 95.24%
Epoch 59 - loss: 0.9528, acc: 95.24% / test_loss: 0.9544, test_acc: 95.02%
Epoch 60 - loss: 0.9542, acc: 95.06% / test_loss: 0.9545, test_acc: 94.99%
Epoch 61 - loss: 0.9529, acc: 95.21% / test_loss: 0.9518, test_acc: 95.33%
Epoch 62 - loss: 0.9519, acc: 95.30% / test_loss: 0.9524, test_acc: 95.25%
Epoch 63 - loss: 0.9536, acc: 95.12% / test_loss: 0.9491, test_acc: 95.58%
Epoch 64 - loss: 0.9491, acc: 95.60% / test_loss: 0.9494, test_acc: 95.58%
Epoch 65 - loss: 0.9497, acc: 95.55% / test_loss: 0.9526, test_acc: 95.23%
Epoch 66 - loss: 0.9495, acc: 95.57% / test_loss: 0.9505, test_acc: 95.48%
Epoch 67 - loss: 0.9479, acc: 95.73% / test_loss: 0.9487, test_acc: 95.63%
Epoch 68 - loss: 0.9473, acc: 95.77% / test_loss: 0.9482, test_acc: 95.67%
Epoch 69 - loss: 0.9468, acc: 95.82% / test_loss: 0.9494, test_acc: 95.55%
Epoch 70 - loss: 0.9491, acc: 95.62% / test_loss: 0.9492, test_acc: 95.58%
Epoch 71 - loss: 0.9485, acc: 95.63% / test_loss: 0.9491, test_acc: 95.58%
Epoch 72 - loss: 0.9482, acc: 95.67% / test_loss: 0.9564, test_acc: 94.81%
Epoch 73 - loss: 0.9472, acc: 95.75% / test_loss: 0.9506, test_acc: 95.42%
Epoch 74 - loss: 0.9463, acc: 95.86% / test_loss: 0.9514, test_acc: 95.32%
Epoch 75 - loss: 0.9458, acc: 95.92% / test_loss: 0.9467, test_acc: 95.83%
Epoch 76 - loss: 0.9461, acc: 95.88% / test_loss: 0.9475, test_acc: 95.73%
Epoch 77 - loss: 0.9470, acc: 95.80% / test_loss: 0.9526, test_acc: 95.22%
Epoch 78 - loss: 0.9478, acc: 95.70% / test_loss: 0.9491, test_acc: 95.58%
Epoch 79 - loss: 0.9469, acc: 95.77% / test_loss: 0.9492, test_acc: 95.55%
Epoch 80 - loss: 0.9461, acc: 95.86% / test_loss: 0.9485, test_acc: 95.62%
Epoch 81 - loss: 0.9472, acc: 95.79% / test_loss: 0.9470, test_acc: 95.82%
Epoch 82 - loss: 0.9469, acc: 95.85% / test_loss: 0.9471, test_acc: 95.79%
Epoch 83 - loss: 0.9454, acc: 95.95% / test_loss: 0.9464, test_acc: 95.84%
Epoch 84 - loss: 0.9464, acc: 95.87% / test_loss: 0.9468, test_acc: 95.80%
Epoch 85 - loss: 0.9453, acc: 95.96% / test_loss: 0.9507, test_acc: 95.41%
Epoch 86 - loss: 0.9464, acc: 95.83% / test_loss: 0.9486, test_acc: 95.61%
Epoch 87 - loss: 0.9467, acc: 95.82% / test_loss: 0.9480, test_acc: 95.68%
Epoch 88 - loss: 0.9459, acc: 95.89% / test_loss: 0.9470, test_acc: 95.78%
Epoch 89 - loss: 0.9460, acc: 95.89% / test_loss: 0.9480, test_acc: 95.67%
Epoch 90 - loss: 0.9464, acc: 95.85% / test_loss: 0.9492, test_acc: 95.57%
Epoch 91 - loss: 0.9461, acc: 95.88% / test_loss: 0.9467, test_acc: 95.80%
Epoch 92 - loss: 0.9451, acc: 95.97% / test_loss: 0.9471, test_acc: 95.76%
Epoch 93 - loss: 0.9457, acc: 95.92% / test_loss: 0.9487, test_acc: 95.64%
Epoch 94 - loss: 0.9473, acc: 95.77% / test_loss: 0.9484, test_acc: 95.64%
Epoch 95 - loss: 0.9453, acc: 95.97% / test_loss: 0.9470, test_acc: 95.77%
Epoch 96 - loss: 0.9466, acc: 95.82% / test_loss: 0.9494, test_acc: 95.54%
Epoch 97 - loss: 0.9461, acc: 95.86% / test_loss: 0.9475, test_acc: 95.76%
Epoch 98 - loss: 0.9456, acc: 95.92% / test_loss: 0.9481, test_acc: 95.68%
Epoch 99 - loss: 0.9464, acc: 95.83% / test_loss: 0.9474, test_acc: 95.76%
Epoch 100 - loss: 0.9459, acc: 95.89% / test_loss: 0.9469, test_acc: 95.78%
Epoch 101 - loss: 0.9453, acc: 95.95% / test_loss: 0.9483, test_acc: 95.66%
Epoch 102 - loss: 0.9450, acc: 95.99% / test_loss: 0.9468, test_acc: 95.79%
Epoch 103 - loss: 0.9444, acc: 96.05% / test_loss: 0.9476, test_acc: 95.67%
Epoch 104 - loss: 0.9466, acc: 95.81% / test_loss: 0.9475, test_acc: 95.73%
Epoch 105 - loss: 0.9461, acc: 95.88% / test_loss: 0.9478, test_acc: 95.70%
Epoch 106 - loss: 0.9450, acc: 95.99% / test_loss: 0.9466, test_acc: 95.80%
Epoch 107 - loss: 0.9453, acc: 95.93% / test_loss: 0.9473, test_acc: 95.74%
Epoch 108 - loss: 0.9446, acc: 96.02% / test_loss: 0.9473, test_acc: 95.73%
Epoch 109 - loss: 0.9488, acc: 95.60% / test_loss: 0.9468, test_acc: 95.82%
Epoch 110 - loss: 0.9476, acc: 95.73% / test_loss: 0.9489, test_acc: 95.61%
Epoch 111 - loss: 0.9462, acc: 95.85% / test_loss: 0.9461, test_acc: 95.87%
Epoch 112 - loss: 0.9449, acc: 96.00% / test_loss: 0.9466, test_acc: 95.86%
Epoch 113 - loss: 0.9471, acc: 95.78% / test_loss: 0.9527, test_acc: 95.20%
Epoch 114 - loss: 0.9456, acc: 95.92% / test_loss: 0.9480, test_acc: 95.67%
Epoch 115 - loss: 0.9452, acc: 95.97% / test_loss: 0.9484, test_acc: 95.66%
Epoch 116 - loss: 0.9452, acc: 95.95% / test_loss: 0.9456, test_acc: 95.91%
Epoch 117 - loss: 0.9442, acc: 96.06% / test_loss: 0.9466, test_acc: 95.82%
Epoch 118 - loss: 0.9452, acc: 95.98% / test_loss: 0.9462, test_acc: 95.87%
Epoch 119 - loss: 0.9446, acc: 96.02% / test_loss: 0.9461, test_acc: 95.89%
Epoch 120 - loss: 0.9444, acc: 96.04% / test_loss: 0.9463, test_acc: 95.84%
Epoch 121 - loss: 0.9439, acc: 96.10% / test_loss: 0.9463, test_acc: 95.82%
Epoch 122 - loss: 0.9451, acc: 95.98% / test_loss: 0.9463, test_acc: 95.87%
Epoch 123 - loss: 0.9457, acc: 95.90% / test_loss: 0.9474, test_acc: 95.74%
Epoch 124 - loss: 0.9446, acc: 96.03% / test_loss: 0.9469, test_acc: 95.79%
Epoch 125 - loss: 0.9465, acc: 95.82% / test_loss: 0.9458, test_acc: 95.88%
Epoch 126 - loss: 0.9441, acc: 96.08% / test_loss: 0.9458, test_acc: 95.92%
Epoch 127 - loss: 0.9441, acc: 96.07% / test_loss: 0.9459, test_acc: 95.89%
Epoch 128 - loss: 0.9463, acc: 95.86% / test_loss: 0.9540, test_acc: 95.08%
Epoch 129 - loss: 0.9464, acc: 95.85% / test_loss: 0.9472, test_acc: 95.74%
Epoch 130 - loss: 0.9461, acc: 95.87% / test_loss: 0.9458, test_acc: 95.91%
Epoch 131 - loss: 0.9448, acc: 95.99% / test_loss: 0.9492, test_acc: 95.56%
Epoch 132 - loss: 0.9434, acc: 96.11% / test_loss: 0.9438, test_acc: 96.08%
Epoch 133 - loss: 0.9427, acc: 96.23% / test_loss: 0.9444, test_acc: 96.05%
Epoch 134 - loss: 0.9417, acc: 96.31% / test_loss: 0.9428, test_acc: 96.21%
Epoch 135 - loss: 0.9402, acc: 96.46% / test_loss: 0.9405, test_acc: 96.42%
Epoch 136 - loss: 0.9370, acc: 96.77% / test_loss: 0.9408, test_acc: 96.38%
Epoch 137 - loss: 0.9366, acc: 96.82% / test_loss: 0.9386, test_acc: 96.65%
Epoch 138 - loss: 0.9343, acc: 97.06% / test_loss: 0.9376, test_acc: 96.72%
Epoch 139 - loss: 0.9329, acc: 97.19% / test_loss: 0.9366, test_acc: 96.83%
Epoch 140 - loss: 0.9326, acc: 97.21% / test_loss: 0.9364, test_acc: 96.87%
Epoch 141 - loss: 0.9312, acc: 97.37% / test_loss: 0.9392, test_acc: 96.57%
Epoch 142 - loss: 0.9298, acc: 97.51% / test_loss: 0.9346, test_acc: 97.04%
Epoch 143 - loss: 0.9309, acc: 97.38% / test_loss: 0.9339, test_acc: 97.10%
Epoch 144 - loss: 0.9295, acc: 97.52% / test_loss: 0.9346, test_acc: 97.04%
Epoch 145 - loss: 0.9295, acc: 97.52% / test_loss: 0.9330, test_acc: 97.17%
Epoch 146 - loss: 0.9266, acc: 97.83% / test_loss: 0.9345, test_acc: 97.02%
Epoch 147 - loss: 0.9265, acc: 97.86% / test_loss: 0.9324, test_acc: 97.24%
Epoch 148 - loss: 0.9290, acc: 97.58% / test_loss: 0.9328, test_acc: 97.19%
Epoch 149 - loss: 0.9267, acc: 97.81% / test_loss: 0.9331, test_acc: 97.13%
Epoch 150 - loss: 0.9266, acc: 97.82% / test_loss: 0.9309, test_acc: 97.37%
Epoch 151 - loss: 0.9265, acc: 97.86% / test_loss: 0.9319, test_acc: 97.25%
Epoch 152 - loss: 0.9260, acc: 97.87% / test_loss: 0.9321, test_acc: 97.26%
Epoch 153 - loss: 0.9276, acc: 97.70% / test_loss: 0.9314, test_acc: 97.32%
Epoch 154 - loss: 0.9261, acc: 97.89% / test_loss: 0.9314, test_acc: 97.34%
Epoch 155 - loss: 0.9255, acc: 97.91% / test_loss: 0.9309, test_acc: 97.39%
Epoch 156 - loss: 0.9247, acc: 98.00% / test_loss: 0.9322, test_acc: 97.23%
Epoch 157 - loss: 0.9253, acc: 97.96% / test_loss: 0.9318, test_acc: 97.34%
Epoch 158 - loss: 0.9265, acc: 97.82% / test_loss: 0.9306, test_acc: 97.40%
Epoch 159 - loss: 0.9243, acc: 98.06% / test_loss: 0.9305, test_acc: 97.46%
Epoch 160 - loss: 0.9247, acc: 98.03% / test_loss: 0.9303, test_acc: 97.45%
Epoch 161 - loss: 0.9239, acc: 98.09% / test_loss: 0.9303, test_acc: 97.43%
Epoch 162 - loss: 0.9251, acc: 97.96% / test_loss: 0.9301, test_acc: 97.49%
Epoch 163 - loss: 0.9253, acc: 97.96% / test_loss: 0.9321, test_acc: 97.25%
Epoch 164 - loss: 0.9247, acc: 98.03% / test_loss: 0.9309, test_acc: 97.39%
Epoch 165 - loss: 0.9241, acc: 98.07% / test_loss: 0.9296, test_acc: 97.51%
Epoch 166 - loss: 0.9239, acc: 98.09% / test_loss: 0.9307, test_acc: 97.41%
Epoch 167 - loss: 0.9247, acc: 98.02% / test_loss: 0.9311, test_acc: 97.37%
Epoch 168 - loss: 0.9235, acc: 98.14% / test_loss: 0.9295, test_acc: 97.53%
Epoch 169 - loss: 0.9245, acc: 98.03% / test_loss: 0.9314, test_acc: 97.31%
Epoch 170 - loss: 0.9252, acc: 97.96% / test_loss: 0.9324, test_acc: 97.27%
Epoch 171 - loss: 0.9233, acc: 98.15% / test_loss: 0.9303, test_acc: 97.46%
Epoch 172 - loss: 0.9230, acc: 98.17% / test_loss: 0.9303, test_acc: 97.46%
Epoch 173 - loss: 0.9224, acc: 98.23% / test_loss: 0.9296, test_acc: 97.51%
Epoch 174 - loss: 0.9228, acc: 98.18% / test_loss: 0.9314, test_acc: 97.30%
Epoch 175 - loss: 0.9236, acc: 98.10% / test_loss: 0.9302, test_acc: 97.44%
Epoch 176 - loss: 0.9227, acc: 98.23% / test_loss: 0.9334, test_acc: 97.12%
Epoch 177 - loss: 0.9224, acc: 98.25% / test_loss: 0.9311, test_acc: 97.35%
Epoch 178 - loss: 0.9228, acc: 98.20% / test_loss: 0.9296, test_acc: 97.50%
Epoch 179 - loss: 0.9217, acc: 98.32% / test_loss: 0.9319, test_acc: 97.27%
Epoch 180 - loss: 0.9228, acc: 98.20% / test_loss: 0.9303, test_acc: 97.49%
Epoch 181 - loss: 0.9234, acc: 98.14% / test_loss: 0.9321, test_acc: 97.27%
Epoch 182 - loss: 0.9224, acc: 98.26% / test_loss: 0.9279, test_acc: 97.69%
Epoch 183 - loss: 0.9215, acc: 98.33% / test_loss: 0.9311, test_acc: 97.37%
Epoch 184 - loss: 0.9213, acc: 98.35% / test_loss: 0.9284, test_acc: 97.64%
Epoch 185 - loss: 0.9202, acc: 98.46% / test_loss: 0.9279, test_acc: 97.67%
Epoch 186 - loss: 0.9206, acc: 98.41% / test_loss: 0.9340, test_acc: 97.06%
Epoch 187 - loss: 0.9213, acc: 98.36% / test_loss: 0.9264, test_acc: 97.85%
Epoch 188 - loss: 0.9204, acc: 98.46% / test_loss: 0.9269, test_acc: 97.79%
Epoch 189 - loss: 0.9208, acc: 98.44% / test_loss: 0.9263, test_acc: 97.86%
Epoch 190 - loss: 0.9193, acc: 98.58% / test_loss: 0.9259, test_acc: 97.94%
Epoch 191 - loss: 0.9199, acc: 98.50% / test_loss: 0.9297, test_acc: 97.52%
Epoch 192 - loss: 0.9196, acc: 98.54% / test_loss: 0.9260, test_acc: 97.88%
Epoch 193 - loss: 0.9186, acc: 98.63% / test_loss: 0.9251, test_acc: 97.95%
Epoch 194 - loss: 0.9191, acc: 98.55% / test_loss: 0.9259, test_acc: 97.90%
Epoch 195 - loss: 0.9194, acc: 98.54% / test_loss: 0.9273, test_acc: 97.74%
Epoch 196 - loss: 0.9189, acc: 98.61% / test_loss: 0.9266, test_acc: 97.77%
Epoch 197 - loss: 0.9189, acc: 98.59% / test_loss: 0.9270, test_acc: 97.81%
Epoch 198 - loss: 0.9188, acc: 98.61% / test_loss: 0.9240, test_acc: 98.08%
Epoch 199 - loss: 0.9183, acc: 98.65% / test_loss: 0.9239, test_acc: 98.11%
Epoch 200 - loss: 0.9175, acc: 98.75% / test_loss: 0.9244, test_acc: 98.01%
Epoch 201 - loss: 0.9202, acc: 98.44% / test_loss: 0.9242, test_acc: 98.06%
Epoch 202 - loss: 0.9176, acc: 98.75% / test_loss: 0.9251, test_acc: 97.96%
Epoch 203 - loss: 0.9173, acc: 98.75% / test_loss: 0.9256, test_acc: 97.94%
Epoch 204 - loss: 0.9170, acc: 98.78% / test_loss: 0.9227, test_acc: 98.17%
Epoch 205 - loss: 0.9170, acc: 98.78% / test_loss: 0.9232, test_acc: 98.14%
Epoch 206 - loss: 0.9173, acc: 98.78% / test_loss: 0.9237, test_acc: 98.09%
Epoch 207 - loss: 0.9184, acc: 98.66% / test_loss: 0.9234, test_acc: 98.14%
Epoch 208 - loss: 0.9195, acc: 98.55% / test_loss: 0.9238, test_acc: 98.08%
Epoch 209 - loss: 0.9183, acc: 98.65% / test_loss: 0.9285, test_acc: 97.62%
Epoch 210 - loss: 0.9175, acc: 98.78% / test_loss: 0.9253, test_acc: 97.96%
Epoch 211 - loss: 0.9158, acc: 98.92% / test_loss: 0.9252, test_acc: 97.95%
Epoch 212 - loss: 0.9166, acc: 98.82% / test_loss: 0.9251, test_acc: 97.98%
Epoch 213 - loss: 0.9173, acc: 98.76% / test_loss: 0.9238, test_acc: 98.07%
Epoch 214 - loss: 0.9180, acc: 98.65% / test_loss: 0.9226, test_acc: 98.24%
Epoch 215 - loss: 0.9167, acc: 98.82% / test_loss: 0.9258, test_acc: 97.89%
Epoch 216 - loss: 0.9178, acc: 98.72% / test_loss: 0.9242, test_acc: 98.04%
Epoch 217 - loss: 0.9161, acc: 98.88% / test_loss: 0.9241, test_acc: 98.07%
Epoch 218 - loss: 0.9153, acc: 98.96% / test_loss: 0.9225, test_acc: 98.28%
Epoch 219 - loss: 0.9162, acc: 98.85% / test_loss: 0.9223, test_acc: 98.26%
Epoch 220 - loss: 0.9171, acc: 98.80% / test_loss: 0.9239, test_acc: 98.09%
Epoch 221 - loss: 0.9169, acc: 98.81% / test_loss: 0.9243, test_acc: 98.05%
Epoch 222 - loss: 0.9164, acc: 98.85% / test_loss: 0.9225, test_acc: 98.21%
Epoch 223 - loss: 0.9156, acc: 98.91% / test_loss: 0.9232, test_acc: 98.10%
Epoch 224 - loss: 0.9152, acc: 98.97% / test_loss: 0.9243, test_acc: 98.07%
Epoch 225 - loss: 0.9154, acc: 98.95% / test_loss: 0.9226, test_acc: 98.21%
Epoch 226 - loss: 0.9158, acc: 98.91% / test_loss: 0.9276, test_acc: 97.71%
Epoch 227 - loss: 0.9196, acc: 98.53% / test_loss: 0.9239, test_acc: 98.14%
Epoch 228 - loss: 0.9184, acc: 98.66% / test_loss: 0.9245, test_acc: 98.05%
Epoch 229 - loss: 0.9171, acc: 98.81% / test_loss: 0.9224, test_acc: 98.28%
Epoch 230 - loss: 0.9153, acc: 98.97% / test_loss: 0.9228, test_acc: 98.19%
Epoch 231 - loss: 0.9149, acc: 99.01% / test_loss: 0.9227, test_acc: 98.16%
Epoch 232 - loss: 0.9156, acc: 98.94% / test_loss: 0.9254, test_acc: 97.94%
Epoch 233 - loss: 0.9155, acc: 98.96% / test_loss: 0.9224, test_acc: 98.23%
Epoch 234 - loss: 0.9175, acc: 98.75% / test_loss: 0.9245, test_acc: 98.05%
Epoch 235 - loss: 0.9159, acc: 98.91% / test_loss: 0.9217, test_acc: 98.30%
Epoch 236 - loss: 0.9150, acc: 98.96% / test_loss: 0.9238, test_acc: 98.10%
Epoch 237 - loss: 0.9158, acc: 98.91% / test_loss: 0.9217, test_acc: 98.35%
Epoch 238 - loss: 0.9153, acc: 98.98% / test_loss: 0.9245, test_acc: 98.04%
Epoch 239 - loss: 0.9147, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 240 - loss: 0.9153, acc: 98.93% / test_loss: 0.9226, test_acc: 98.20%
Epoch 241 - loss: 0.9152, acc: 98.98% / test_loss: 0.9250, test_acc: 97.98%
Epoch 242 - loss: 0.9161, acc: 98.85% / test_loss: 0.9235, test_acc: 98.14%
Epoch 243 - loss: 0.9155, acc: 98.94% / test_loss: 0.9223, test_acc: 98.23%
Epoch 244 - loss: 0.9156, acc: 98.92% / test_loss: 0.9221, test_acc: 98.26%
Epoch 245 - loss: 0.9150, acc: 99.00% / test_loss: 0.9221, test_acc: 98.32%
Epoch 246 - loss: 0.9159, acc: 98.91% / test_loss: 0.9209, test_acc: 98.39%
Epoch 247 - loss: 0.9144, acc: 99.05% / test_loss: 0.9214, test_acc: 98.36%
Epoch 248 - loss: 0.9154, acc: 98.93% / test_loss: 0.9226, test_acc: 98.23%
Epoch 249 - loss: 0.9151, acc: 98.98% / test_loss: 0.9216, test_acc: 98.32%
Epoch 250 - loss: 0.9147, acc: 99.01% / test_loss: 0.9226, test_acc: 98.25%
Epoch 251 - loss: 0.9149, acc: 98.97% / test_loss: 0.9261, test_acc: 97.88%
Epoch 252 - loss: 0.9151, acc: 98.97% / test_loss: 0.9230, test_acc: 98.17%
Epoch 253 - loss: 0.9162, acc: 98.88% / test_loss: 0.9220, test_acc: 98.25%
Epoch 254 - loss: 0.9173, acc: 98.75% / test_loss: 0.9228, test_acc: 98.23%
Epoch 255 - loss: 0.9155, acc: 98.94% / test_loss: 0.9217, test_acc: 98.32%
Epoch 256 - loss: 0.9153, acc: 98.97% / test_loss: 0.9225, test_acc: 98.25%
Epoch 257 - loss: 0.9154, acc: 98.96% / test_loss: 0.9230, test_acc: 98.17%
Epoch 258 - loss: 0.9145, acc: 99.03% / test_loss: 0.9239, test_acc: 98.09%
Epoch 259 - loss: 0.9147, acc: 99.01% / test_loss: 0.9254, test_acc: 97.97%
Epoch 260 - loss: 0.9157, acc: 98.91% / test_loss: 0.9229, test_acc: 98.20%
Epoch 261 - loss: 0.9164, acc: 98.84% / test_loss: 0.9222, test_acc: 98.29%
Epoch 262 - loss: 0.9142, acc: 99.05% / test_loss: 0.9217, test_acc: 98.30%
Epoch 263 - loss: 0.9145, acc: 99.04% / test_loss: 0.9232, test_acc: 98.15%
Epoch 264 - loss: 0.9151, acc: 98.97% / test_loss: 0.9231, test_acc: 98.16%
Epoch 265 - loss: 0.9144, acc: 99.06% / test_loss: 0.9214, test_acc: 98.35%
Epoch 266 - loss: 0.9121, acc: 99.31% / test_loss: 0.9210, test_acc: 98.42%
Epoch 267 - loss: 0.9112, acc: 99.38% / test_loss: 0.9179, test_acc: 98.70%
Epoch 268 - loss: 0.9120, acc: 99.28% / test_loss: 0.9184, test_acc: 98.63%
Epoch 269 - loss: 0.9111, acc: 99.38% / test_loss: 0.9184, test_acc: 98.65%
Epoch 270 - loss: 0.9105, acc: 99.45% / test_loss: 0.9176, test_acc: 98.69%
Epoch 271 - loss: 0.9117, acc: 99.32% / test_loss: 0.9238, test_acc: 98.10%
Epoch 272 - loss: 0.9122, acc: 99.27% / test_loss: 0.9194, test_acc: 98.57%
Epoch 273 - loss: 0.9123, acc: 99.25% / test_loss: 0.9173, test_acc: 98.77%
Epoch 274 - loss: 0.9115, acc: 99.34% / test_loss: 0.9185, test_acc: 98.62%
Epoch 275 - loss: 0.9115, acc: 99.34% / test_loss: 0.9202, test_acc: 98.47%
Epoch 276 - loss: 0.9108, acc: 99.43% / test_loss: 0.9182, test_acc: 98.66%
Epoch 277 - loss: 0.9110, acc: 99.38% / test_loss: 0.9178, test_acc: 98.71%
Epoch 278 - loss: 0.9109, acc: 99.41% / test_loss: 0.9188, test_acc: 98.62%
Epoch 279 - loss: 0.9117, acc: 99.33% / test_loss: 0.9187, test_acc: 98.59%
Epoch 280 - loss: 0.9112, acc: 99.38% / test_loss: 0.9178, test_acc: 98.72%
Epoch 281 - loss: 0.9134, acc: 99.17% / test_loss: 0.9195, test_acc: 98.51%
Epoch 282 - loss: 0.9112, acc: 99.37% / test_loss: 0.9193, test_acc: 98.55%
Epoch 283 - loss: 0.9114, acc: 99.37% / test_loss: 0.9193, test_acc: 98.56%
Epoch 284 - loss: 0.9113, acc: 99.35% / test_loss: 0.9181, test_acc: 98.68%
Epoch 285 - loss: 0.9109, acc: 99.40% / test_loss: 0.9175, test_acc: 98.73%
Epoch 286 - loss: 0.9104, acc: 99.45% / test_loss: 0.9178, test_acc: 98.67%
Epoch 287 - loss: 0.9114, acc: 99.34% / test_loss: 0.9201, test_acc: 98.46%
Epoch 288 - loss: 0.9128, acc: 99.19% / test_loss: 0.9169, test_acc: 98.80%
Epoch 289 - loss: 0.9110, acc: 99.39% / test_loss: 0.9185, test_acc: 98.62%
Epoch 290 - loss: 0.9104, acc: 99.45% / test_loss: 0.9183, test_acc: 98.66%
Epoch 291 - loss: 0.9106, acc: 99.42% / test_loss: 0.9183, test_acc: 98.63%
Epoch 292 - loss: 0.9117, acc: 99.31% / test_loss: 0.9190, test_acc: 98.57%
Epoch 293 - loss: 0.9116, acc: 99.32% / test_loss: 0.9233, test_acc: 98.15%
Epoch 294 - loss: 0.9106, acc: 99.43% / test_loss: 0.9177, test_acc: 98.70%
Epoch 295 - loss: 0.9112, acc: 99.37% / test_loss: 0.9194, test_acc: 98.54%
Epoch 296 - loss: 0.9112, acc: 99.36% / test_loss: 0.9171, test_acc: 98.76%
Epoch 297 - loss: 0.9100, acc: 99.49% / test_loss: 0.9168, test_acc: 98.81%
Epoch 298 - loss: 0.9105, acc: 99.45% / test_loss: 0.9177, test_acc: 98.72%
Epoch 299 - loss: 0.9117, acc: 99.32% / test_loss: 0.9199, test_acc: 98.48%
Epoch 300 - loss: 0.9109, acc: 99.40% / test_loss: 0.9168, test_acc: 98.80%
Epoch 301 - loss: 0.9112, acc: 99.39% / test_loss: 0.9218, test_acc: 98.32%
Epoch 302 - loss: 0.9111, acc: 99.37% / test_loss: 0.9182, test_acc: 98.66%
Epoch 303 - loss: 0.9115, acc: 99.33% / test_loss: 0.9224, test_acc: 98.27%
Epoch 304 - loss: 0.9108, acc: 99.41% / test_loss: 0.9206, test_acc: 98.43%
Epoch 305 - loss: 0.9116, acc: 99.32% / test_loss: 0.9189, test_acc: 98.60%
Epoch 306 - loss: 0.9117, acc: 99.30% / test_loss: 0.9171, test_acc: 98.78%
Epoch 307 - loss: 0.9101, acc: 99.49% / test_loss: 0.9180, test_acc: 98.68%
Epoch 308 - loss: 0.9110, acc: 99.37% / test_loss: 0.9171, test_acc: 98.78%
Epoch 309 - loss: 0.9108, acc: 99.39% / test_loss: 0.9236, test_acc: 98.13%
Epoch 310 - loss: 0.9129, acc: 99.16% / test_loss: 0.9182, test_acc: 98.63%
Epoch 311 - loss: 0.9102, acc: 99.46% / test_loss: 0.9191, test_acc: 98.60%
Epoch 312 - loss: 0.9105, acc: 99.43% / test_loss: 0.9188, test_acc: 98.60%
Epoch 313 - loss: 0.9110, acc: 99.41% / test_loss: 0.9188, test_acc: 98.61%
Epoch 314 - loss: 0.9108, acc: 99.41% / test_loss: 0.9173, test_acc: 98.73%
Epoch 315 - loss: 0.9113, acc: 99.34% / test_loss: 0.9182, test_acc: 98.66%
Epoch 316 - loss: 0.9135, acc: 99.12% / test_loss: 0.9192, test_acc: 98.55%
Epoch 317 - loss: 0.9112, acc: 99.37% / test_loss: 0.9190, test_acc: 98.60%
Epoch 318 - loss: 0.9110, acc: 99.39% / test_loss: 0.9205, test_acc: 98.44%
Epoch 319 - loss: 0.9105, acc: 99.43% / test_loss: 0.9169, test_acc: 98.76%
Epoch 320 - loss: 0.9114, acc: 99.35% / test_loss: 0.9180, test_acc: 98.65%
Epoch 321 - loss: 0.9113, acc: 99.36% / test_loss: 0.9178, test_acc: 98.69%
Epoch 322 - loss: 0.9104, acc: 99.44% / test_loss: 0.9184, test_acc: 98.63%
Epoch 323 - loss: 0.9104, acc: 99.44% / test_loss: 0.9226, test_acc: 98.24%
Epoch 324 - loss: 0.9105, acc: 99.43% / test_loss: 0.9178, test_acc: 98.66%
Epoch 325 - loss: 0.9102, acc: 99.47% / test_loss: 0.9187, test_acc: 98.63%
Epoch 326 - loss: 0.9115, acc: 99.34% / test_loss: 0.9188, test_acc: 98.61%
Epoch 327 - loss: 0.9111, acc: 99.38% / test_loss: 0.9188, test_acc: 98.63%
Epoch 328 - loss: 0.9103, acc: 99.45% / test_loss: 0.9204, test_acc: 98.41%
Epoch 329 - loss: 0.9113, acc: 99.35% / test_loss: 0.9194, test_acc: 98.56%
Epoch 330 - loss: 0.9108, acc: 99.40% / test_loss: 0.9205, test_acc: 98.44%
Epoch 331 - loss: 0.9128, acc: 99.20% / test_loss: 0.9180, test_acc: 98.64%
Epoch 332 - loss: 0.9116, acc: 99.31% / test_loss: 0.9179, test_acc: 98.72%
Epoch 333 - loss: 0.9108, acc: 99.41% / test_loss: 0.9177, test_acc: 98.68%
Epoch 334 - loss: 0.9107, acc: 99.42% / test_loss: 0.9176, test_acc: 98.72%
Epoch 335 - loss: 0.9106, acc: 99.43% / test_loss: 0.9160, test_acc: 98.88%
Epoch 336 - loss: 0.9110, acc: 99.39% / test_loss: 0.9181, test_acc: 98.66%
Epoch 337 - loss: 0.9116, acc: 99.31% / test_loss: 0.9174, test_acc: 98.74%
Epoch 338 - loss: 0.9123, acc: 99.25% / test_loss: 0.9255, test_acc: 97.92%
Epoch 339 - loss: 0.9104, acc: 99.46% / test_loss: 0.9177, test_acc: 98.75%
Epoch 340 - loss: 0.9107, acc: 99.41% / test_loss: 0.9178, test_acc: 98.69%
Epoch 341 - loss: 0.9100, acc: 99.48% / test_loss: 0.9164, test_acc: 98.83%
Epoch 342 - loss: 0.9105, acc: 99.42% / test_loss: 0.9192, test_acc: 98.54%
Epoch 343 - loss: 0.9110, acc: 99.38% / test_loss: 0.9201, test_acc: 98.45%
Epoch 344 - loss: 0.9121, acc: 99.27% / test_loss: 0.9161, test_acc: 98.88%
Epoch 345 - loss: 0.9109, acc: 99.39% / test_loss: 0.9176, test_acc: 98.69%
Epoch 346 - loss: 0.9102, acc: 99.48% / test_loss: 0.9170, test_acc: 98.78%
Epoch 347 - loss: 0.9105, acc: 99.43% / test_loss: 0.9179, test_acc: 98.64%
Epoch 348 - loss: 0.9102, acc: 99.47% / test_loss: 0.9280, test_acc: 97.67%
Epoch 349 - loss: 0.9106, acc: 99.44% / test_loss: 0.9169, test_acc: 98.78%
Epoch 350 - loss: 0.9109, acc: 99.40% / test_loss: 0.9206, test_acc: 98.42%
Epoch 351 - loss: 0.9125, acc: 99.21% / test_loss: 0.9183, test_acc: 98.63%
Epoch 352 - loss: 0.9102, acc: 99.46% / test_loss: 0.9174, test_acc: 98.73%
Epoch 353 - loss: 0.9097, acc: 99.52% / test_loss: 0.9180, test_acc: 98.68%
Epoch 354 - loss: 0.9097, acc: 99.52% / test_loss: 0.9166, test_acc: 98.80%
Epoch 355 - loss: 0.9094, acc: 99.54% / test_loss: 0.9169, test_acc: 98.80%
Epoch 356 - loss: 0.9094, acc: 99.55% / test_loss: 0.9167, test_acc: 98.81%
Epoch 357 - loss: 0.9094, acc: 99.55% / test_loss: 0.9167, test_acc: 98.81%
Epoch 358 - loss: 0.9118, acc: 99.30% / test_loss: 0.9208, test_acc: 98.41%
Epoch 359 - loss: 0.9112, acc: 99.36% / test_loss: 0.9207, test_acc: 98.40%
Epoch 360 - loss: 0.9109, acc: 99.40% / test_loss: 0.9179, test_acc: 98.69%
Epoch 361 - loss: 0.9105, acc: 99.43% / test_loss: 0.9168, test_acc: 98.78%
Epoch 362 - loss: 0.9098, acc: 99.52% / test_loss: 0.9174, test_acc: 98.72%
Epoch 363 - loss: 0.9098, acc: 99.50% / test_loss: 0.9200, test_acc: 98.47%
Epoch 364 - loss: 0.9097, acc: 99.51% / test_loss: 0.9162, test_acc: 98.86%
Epoch 365 - loss: 0.9094, acc: 99.55% / test_loss: 0.9185, test_acc: 98.61%
Epoch 366 - loss: 0.9096, acc: 99.53% / test_loss: 0.9188, test_acc: 98.60%
Epoch 367 - loss: 0.9109, acc: 99.40% / test_loss: 0.9180, test_acc: 98.68%
Epoch 368 - loss: 0.9125, acc: 99.22% / test_loss: 0.9182, test_acc: 98.69%
Epoch 369 - loss: 0.9115, acc: 99.35% / test_loss: 0.9171, test_acc: 98.76%
Epoch 370 - loss: 0.9107, acc: 99.42% / test_loss: 0.9181, test_acc: 98.68%
Epoch 371 - loss: 0.9104, acc: 99.44% / test_loss: 0.9186, test_acc: 98.60%
Epoch 372 - loss: 0.9107, acc: 99.42% / test_loss: 0.9178, test_acc: 98.69%
Epoch 373 - loss: 0.9108, acc: 99.39% / test_loss: 0.9181, test_acc: 98.66%
Epoch 374 - loss: 0.9101, acc: 99.48% / test_loss: 0.9176, test_acc: 98.72%
Epoch 375 - loss: 0.9101, acc: 99.48% / test_loss: 0.9170, test_acc: 98.78%
Epoch 376 - loss: 0.9096, acc: 99.52% / test_loss: 0.9169, test_acc: 98.80%
Epoch 377 - loss: 0.9130, acc: 99.20% / test_loss: 0.9189, test_acc: 98.55%
Epoch 378 - loss: 0.9109, acc: 99.40% / test_loss: 0.9213, test_acc: 98.35%
Epoch 379 - loss: 0.9106, acc: 99.42% / test_loss: 0.9174, test_acc: 98.76%
Epoch 380 - loss: 0.9121, acc: 99.28% / test_loss: 0.9179, test_acc: 98.72%
Epoch 381 - loss: 0.9103, acc: 99.46% / test_loss: 0.9173, test_acc: 98.74%
Epoch 382 - loss: 0.9099, acc: 99.49% / test_loss: 0.9181, test_acc: 98.67%
Epoch 383 - loss: 0.9107, acc: 99.42% / test_loss: 0.9174, test_acc: 98.74%
Epoch 384 - loss: 0.9112, acc: 99.37% / test_loss: 0.9175, test_acc: 98.73%
Epoch 385 - loss: 0.9098, acc: 99.49% / test_loss: 0.9169, test_acc: 98.78%
Epoch 386 - loss: 0.9103, acc: 99.46% / test_loss: 0.9172, test_acc: 98.75%
Epoch 387 - loss: 0.9121, acc: 99.27% / test_loss: 0.9180, test_acc: 98.68%
Epoch 388 - loss: 0.9109, acc: 99.40% / test_loss: 0.9204, test_acc: 98.42%
Epoch 389 - loss: 0.9102, acc: 99.49% / test_loss: 0.9172, test_acc: 98.76%
Epoch 390 - loss: 0.9098, acc: 99.51% / test_loss: 0.9178, test_acc: 98.71%
Epoch 391 - loss: 0.9107, acc: 99.41% / test_loss: 0.9197, test_acc: 98.51%
Epoch 392 - loss: 0.9098, acc: 99.50% / test_loss: 0.9175, test_acc: 98.75%
Epoch 393 - loss: 0.9101, acc: 99.46% / test_loss: 0.9178, test_acc: 98.67%
Epoch 394 - loss: 0.9114, acc: 99.35% / test_loss: 0.9225, test_acc: 98.23%
Epoch 395 - loss: 0.9107, acc: 99.41% / test_loss: 0.9183, test_acc: 98.63%
Epoch 396 - loss: 0.9104, acc: 99.45% / test_loss: 0.9177, test_acc: 98.68%
Epoch 397 - loss: 0.9100, acc: 99.49% / test_loss: 0.9169, test_acc: 98.80%
Epoch 398 - loss: 0.9099, acc: 99.49% / test_loss: 0.9179, test_acc: 98.69%
Epoch 399 - loss: 0.9100, acc: 99.49% / test_loss: 0.9175, test_acc: 98.72%
Epoch 400 - loss: 0.9106, acc: 99.42% / test_loss: 0.9170, test_acc: 98.76%
Best test accuracy 98.88% in epoch 335.
----------------------------------------------------------------------------------------------------
###Markdown
Print the best test accuracy of each run
###Code
for i, a in enumerate(best_test_accs):
print('Run {}: {:.2f}%'.format(i+1, a*100))
###Output
Run 1: 98.46%
Run 2: 98.47%
Run 3: 98.41%
Run 4: 98.81%
Run 5: 98.37%
Run 6: 98.37%
Run 7: 98.51%
Run 8: 98.42%
Run 9: 98.50%
Run 10: 98.88%
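###Markdown
To judge the spread across the ten runs at a glance, the list above can be summarized numerically. A minimal sketch, assuming `best_test_accs` from the training loop is still in memory; the results are kept in variables (as fractions in [0, 1]) rather than printed.
###Code
import numpy as np

# Summarize the best-per-run test accuracies collected above.
accs = np.array(best_test_accs)
mean_acc = accs.mean()              # average best test accuracy over the runs
std_acc = accs.std()                # run-to-run variability
best_run = int(accs.argmax()) + 1   # 1-based index of the strongest run
worst_run = int(accs.argmin()) + 1  # 1-based index of the weakest run
###Output
_____no_output_____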
###Markdown
1D-CNN Model for ECG Classification
- The model has 2 convolutional layers and 2 fully connected layers.
- This code repeats the training process several times and records everything needed afterwards, such as the per-epoch loss and accuracy for plotting and the maximum test accuracy of each run.

Get permission for Google Drive access
###Code
from google.colab import drive
drive.mount('/content/gdrive')
root_path = 'gdrive/My Drive/Colab Notebooks'
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
File name settings
###Code
data_dir = 'mitdb'
train_name = 'train_ecg.hdf5'
test_name = 'test_ecg.hdf5'
all_name = 'all_ecg.hdf5'
model_dir = 'model'
model_name = 'conv2'
model_ext = '.pth'
csv_dir = 'csv'
csv_ext = '.csv'
csv_name = 'conv2'
csv_accs_name = 'accs_conv2'
###Output
_____no_output_____
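###Markdown
The names defined above are combined later with `os.path.join` and `'_'.join` when checkpoints and CSV logs are written. A minimal sketch of the resulting paths, using an illustrative run number and epoch (the values 1 and 10 are only examples):
###Code
import os

# Example of how the file names defined above are combined later on.
# The run index and epoch below are illustrative values only.
example_run, example_epoch = 1, 10
best_ckpt = os.path.join(root_path, model_dir,
                         '_'.join([model_name, str(example_run), 'best']) + model_ext)
epoch_ckpt = os.path.join(root_path, model_dir,
                          '_'.join([model_name, str(example_run), str(example_epoch)]) + model_ext)
run_csv = os.path.join(root_path, csv_dir,
                       '_'.join([csv_name, str(example_run)]) + csv_ext)
# best_ckpt  -> gdrive/My Drive/Colab Notebooks/model/conv2_1_best.pth
# epoch_ckpt -> gdrive/My Drive/Colab Notebooks/model/conv2_1_10.pth
# run_csv    -> gdrive/My Drive/Colab Notebooks/csv/conv2_1.csv
###Output
_____no_output_____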
###Markdown
Import required packages
###Code
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torch.optim import Adam
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
###Output
_____no_output_____
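###Markdown
Note that the repeated runs below are not seeded, which is part of why the best test accuracy differs from run to run. If reproducible runs were wanted, a minimal sketch of seeding (an optional addition, not part of the original training code) could look like this:
###Code
import random

import numpy as np
import torch

def set_seed(seed=0):
    # Seed Python, NumPy and PyTorch so repeated runs start from the same state.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op when CUDA is unavailable
###Output
_____no_output_____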
###Markdown
GPU settings
###Code
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
###Output
Tesla K80
###Markdown
Define `ECG` `Dataset` class
###Code
class ECG(Dataset):
def __init__(self, mode='train'):
if mode == 'train':
with h5py.File(os.path.join(root_path, data_dir, train_name), 'r') as hdf:
self.x = hdf['x_train'][:]
self.y = hdf['y_train'][:]
elif mode == 'test':
with h5py.File(os.path.join(root_path, data_dir, test_name), 'r') as hdf:
self.x = hdf['x_test'][:]
self.y = hdf['y_test'][:]
elif mode == 'all':
with h5py.File(os.path.join(root_path, data_dir, all_name), 'r') as hdf:
self.x = hdf['x'][:]
self.y = hdf['y'][:]
else:
raise ValueError('Argument of mode should be train, test, or all.')
def __len__(self):
return len(self.x)
def __getitem__(self, idx):
return torch.tensor(self.x[idx], dtype=torch.float), torch.tensor(self.y[idx])
###Output
_____no_output_____
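###Markdown
Before wrapping the dataset in a `DataLoader`, it can be indexed directly to sanity-check a single beat. A minimal sketch, assuming the HDF5 files above are present in Google Drive; the expected shapes are noted in comments instead of being printed.
###Code
# Quick sanity check on a single sample of the training split.
sample_dataset = ECG(mode='train')
x0, y0 = sample_dataset[0]       # first beat and its label
n_train = len(sample_dataset)    # number of training beats
# x0.shape -> torch.Size([1, 128]): one channel, 128 samples per beat
# y0       -> scalar tensor holding the class index (one of 5 classes)
###Output
_____no_output_____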
###Markdown
Make Batch Generator
Batch size: you can change it if you want.
###Code
batch_size = 32
###Output
_____no_output_____
###Markdown
`DataLoader` for batch generation: `torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False)`
###Code
train_dataset = ECG(mode='train')
test_dataset = ECG(mode='test')
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Size check for single batch
###Code
x_train, y_train = next(iter(train_loader))
print(x_train.size())
print(y_train.size())
###Output
torch.Size([32, 1, 128])
torch.Size([32])
###Markdown
Number of total batches
###Code
total_batch = len(train_loader)
print(total_batch)
###Output
414
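###Markdown
The 414 batches reported above follow from the dataset size and the batch size of 32: `len(train_loader)` equals the ceiling of `len(train_dataset) / batch_size` when the last, smaller batch is kept (the default `drop_last=False`). A minimal sketch of that check:
###Code
import math

# len(train_loader) == ceil(len(train_dataset) / batch_size)
expected_batches = math.ceil(len(train_dataset) / batch_size)
assert expected_batches == len(train_loader)
# With batch_size = 32 and 414 batches, the training split holds
# between 413 * 32 + 1 = 13217 and 414 * 32 = 13248 beats.
###Output
_____no_output_____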
###Markdown
PyTorch layer modules for **Conv1D** Network
- `Conv1d` layer: `torch.nn.Conv1d(in_channels, out_channels, kernel_size)`
- `MaxPool1d` layer: `torch.nn.MaxPool1d(kernel_size, stride=None)`; the parameter `stride` defaults to `kernel_size`.
- `ReLU` layer: `torch.nn.ReLU()`
- `Linear` layer: `torch.nn.Linear(in_features, out_features, bias=True)`
- `Softmax` layer: `torch.nn.Softmax(dim=None)`; the parameter `dim` is usually set to `1`.

Construct 1D CNN ECG classification model
###Code
class ECGConv1D(nn.Module):
def __init__(self):
super(ECGConv1D, self).__init__()
self.conv1 = nn.Conv1d(1, 16, 7, padding=3) # 128 x 16
self.relu1 = nn.LeakyReLU()
self.pool1 = nn.MaxPool1d(2) # 64 x 16
self.conv2 = nn.Conv1d(16, 16, 5, padding=2) # 64 x 16
self.relu2 = nn.LeakyReLU()
self.pool2 = nn.MaxPool1d(2) # 32 x 16
self.linear3 = nn.Linear(32 * 16, 128)
self.relu3 = nn.LeakyReLU()
self.linear4 = nn.Linear(128, 5)
self.softmax4 = nn.Softmax(dim=1)
def forward(self, x):
x = self.conv1(x)
x = self.relu1(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.relu2(x)
x = self.pool2(x)
x = x.view(-1, 32 * 16)
x = self.linear3(x)
x = self.relu3(x)
x = self.linear4(x)
x = self.softmax4(x)
return x
ecgnet = ECGConv1D()
ecgnet.to(device)
###Output
_____no_output_____
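###Markdown
The `32 * 16` input size of `linear3` comes from the shape comments in the constructor: the padded convolutions keep the 128-sample length, and each `MaxPool1d(2)` halves it, leaving 16 feature maps of length 32. A minimal sketch that traces a dummy batch through the convolutional part to confirm the shapes (random data, used only for this check):
###Code
# Trace the tensor shape through the convolutional part of ECGConv1D.
with torch.no_grad():
    dummy = torch.randn(4, 1, 128, device=device)           # (batch, channels, length)
    feat = ecgnet.pool1(ecgnet.relu1(ecgnet.conv1(dummy)))   # -> (4, 16, 64)
    feat = ecgnet.pool2(ecgnet.relu2(ecgnet.conv2(feat)))    # -> (4, 16, 32)
    flat = feat.view(-1, 32 * 16)                            # -> (4, 512), matches linear3
###Output
_____no_output_____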
###Markdown
Training process settings
###Code
run = 10
epoch = 400
lr = 0.001
###Output
_____no_output_____
###Markdown
Training function
###Code
def train(nrun, model):
criterion = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=lr)
train_losses = list()
train_accs = list()
test_losses = list()
test_accs = list()
best_test_acc = 0 # best test accuracy
for e in range(epoch):
print("Epoch {} - ".format(e+1), end='')
# train
train_loss = 0.0
correct, total = 0, 0
for _, batch in enumerate(train_loader):
x, label = batch # get feature and label from a batch
x, label = x.to(device), label.to(device) # send to device
optimizer.zero_grad() # init all grads to zero
output = model(x) # forward propagation
loss = criterion(output, label) # calculate loss
loss.backward() # backward propagation
optimizer.step() # weight update
train_loss += loss.item()
correct += torch.sum(output.argmax(dim=1) == label).item()
total += len(label)
train_losses.append(train_loss / len(train_loader))
train_accs.append(correct / total)
print("loss: {:.4f}, acc: {:.2f}%".format(train_losses[-1], train_accs[-1]*100), end=' / ')
# test
with torch.no_grad():
test_loss = 0.0
correct, total = 0, 0
for _, batch in enumerate(test_loader):
x, label = batch
x, label = x.to(device), label.to(device)
output = model(x)
loss = criterion(output, label)
test_loss += loss.item()
correct += torch.sum(output.argmax(dim=1) == label).item()
total += len(label)
test_losses.append(test_loss / len(test_loader))
test_accs.append(correct / total)
print("test_loss: {:.4f}, test_acc: {:.2f}%".format(test_losses[-1], test_accs[-1]*100))
    # save the model with the best test accuracy so far
if test_accs[-1] > best_test_acc:
best_test_acc = test_accs[-1]
torch.save(model.state_dict(), os.path.join(root_path, model_dir, '_'.join([model_name, str(nrun), 'best']) + model_ext))
    # save a checkpoint every 10 epochs
if (e + 1) % 10 == 0:
torch.save(model.state_dict(), os.path.join(root_path, model_dir, '_'.join([model_name, str(nrun), str(e+1)]) + model_ext))
return train_losses, train_accs, test_losses, test_accs
###Output
_____no_output_____
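###Markdown
Once the loop in the next cell has finished, the best checkpoint that `train()` saved for a run can be reloaded and re-scored on the test set. A minimal sketch, assuming run 1 has completed so that `conv2_1_best.pth` exists in the model directory:
###Code
# Reload the best checkpoint of an already-finished run (run 1 here) and
# recompute its test accuracy.
best_model = ECGConv1D().to(device)
ckpt_path = os.path.join(root_path, model_dir, '_'.join([model_name, '1', 'best']) + model_ext)
best_model.load_state_dict(torch.load(ckpt_path, map_location=device))
best_model.eval()

correct, total = 0, 0
with torch.no_grad():
    for x, label in test_loader:
        x, label = x.to(device), label.to(device)
        pred = best_model(x).argmax(dim=1)
        correct += (pred == label).sum().item()
        total += len(label)
reloaded_test_acc = correct / total  # should match the best test accuracy reported for run 1
###Output
_____no_output_____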
###Markdown
Training process: repeat for 10 times
###Code
best_test_accs = list()
for i in range(run):
print('Run', i+1)
ecgnet = ECGConv1D() # init new model
train_losses, train_accs, test_losses, test_accs = train(i, ecgnet.to(device)) # train
best_test_accs.append(max(test_accs)) # get best test accuracy
best_test_acc_epoch = np.array(test_accs).argmax() + 1
print('Best test accuracy {:.2f}% in epoch {}.'.format(best_test_accs[-1]*100, best_test_acc_epoch))
print('-' * 100)
df = pd.DataFrame({ # save model training process into csv file
'loss': train_losses,
'test_loss': test_losses,
'acc': train_accs,
'test_acc': test_accs
})
df.to_csv(os.path.join(root_path, csv_dir, '_'.join([csv_name, str(i+1)]) + csv_ext))
df = pd.DataFrame({'best_test_acc': best_test_accs}) # save best test accuracy of each run
df.to_csv(os.path.join(root_path, csv_dir, csv_accs_name + csv_ext))
###Output
Run 1
Epoch 1 - loss: 1.3349, acc: 57.54% / test_loss: 1.1609, test_acc: 76.32%
Epoch 2 - loss: 1.1277, acc: 79.02% / test_loss: 1.0856, test_acc: 82.60%
Epoch 3 - loss: 1.0807, acc: 82.95% / test_loss: 1.0603, test_acc: 84.69%
Epoch 4 - loss: 1.0573, acc: 85.03% / test_loss: 1.0605, test_acc: 84.92%
Epoch 5 - loss: 1.0396, acc: 86.81% / test_loss: 1.0050, test_acc: 90.91%
Epoch 6 - loss: 0.9978, acc: 91.19% / test_loss: 0.9859, test_acc: 92.15%
Epoch 7 - loss: 0.9887, acc: 92.00% / test_loss: 0.9822, test_acc: 92.29%
Epoch 8 - loss: 0.9856, acc: 92.06% / test_loss: 0.9805, test_acc: 92.56%
Epoch 9 - loss: 0.9816, acc: 92.52% / test_loss: 0.9742, test_acc: 93.15%
Epoch 10 - loss: 0.9796, acc: 92.71% / test_loss: 0.9762, test_acc: 92.99%
Epoch 11 - loss: 0.9788, acc: 92.80% / test_loss: 0.9732, test_acc: 93.26%
Epoch 12 - loss: 0.9749, acc: 93.16% / test_loss: 0.9693, test_acc: 93.70%
Epoch 13 - loss: 0.9739, acc: 93.22% / test_loss: 0.9701, test_acc: 93.49%
Epoch 14 - loss: 0.9740, acc: 93.18% / test_loss: 0.9706, test_acc: 93.47%
Epoch 15 - loss: 0.9734, acc: 93.17% / test_loss: 0.9668, test_acc: 93.95%
Epoch 16 - loss: 0.9722, acc: 93.34% / test_loss: 0.9653, test_acc: 94.04%
Epoch 17 - loss: 0.9707, acc: 93.51% / test_loss: 0.9650, test_acc: 94.09%
Epoch 18 - loss: 0.9720, acc: 93.39% / test_loss: 0.9645, test_acc: 94.13%
Epoch 19 - loss: 0.9678, acc: 93.74% / test_loss: 0.9637, test_acc: 94.20%
Epoch 20 - loss: 0.9674, acc: 93.79% / test_loss: 0.9738, test_acc: 93.16%
Epoch 21 - loss: 0.9671, acc: 93.81% / test_loss: 0.9650, test_acc: 94.09%
Epoch 22 - loss: 0.9636, acc: 94.16% / test_loss: 0.9607, test_acc: 94.47%
Epoch 23 - loss: 0.9631, acc: 94.22% / test_loss: 0.9600, test_acc: 94.56%
Epoch 24 - loss: 0.9634, acc: 94.19% / test_loss: 0.9593, test_acc: 94.63%
Epoch 25 - loss: 0.9623, acc: 94.31% / test_loss: 0.9600, test_acc: 94.59%
Epoch 26 - loss: 0.9604, acc: 94.52% / test_loss: 0.9573, test_acc: 94.81%
Epoch 27 - loss: 0.9613, acc: 94.41% / test_loss: 0.9588, test_acc: 94.65%
Epoch 28 - loss: 0.9622, acc: 94.32% / test_loss: 0.9587, test_acc: 94.63%
Epoch 29 - loss: 0.9603, acc: 94.47% / test_loss: 0.9572, test_acc: 94.80%
Epoch 30 - loss: 0.9598, acc: 94.53% / test_loss: 0.9569, test_acc: 94.81%
Epoch 31 - loss: 0.9607, acc: 94.44% / test_loss: 0.9599, test_acc: 94.56%
Epoch 32 - loss: 0.9595, acc: 94.59% / test_loss: 0.9574, test_acc: 94.71%
Epoch 33 - loss: 0.9609, acc: 94.39% / test_loss: 0.9576, test_acc: 94.77%
Epoch 34 - loss: 0.9585, acc: 94.69% / test_loss: 0.9578, test_acc: 94.75%
Epoch 35 - loss: 0.9614, acc: 94.38% / test_loss: 0.9566, test_acc: 94.84%
Epoch 36 - loss: 0.9594, acc: 94.56% / test_loss: 0.9594, test_acc: 94.59%
Epoch 37 - loss: 0.9588, acc: 94.63% / test_loss: 0.9566, test_acc: 94.84%
Epoch 38 - loss: 0.9590, acc: 94.62% / test_loss: 0.9569, test_acc: 94.79%
Epoch 39 - loss: 0.9593, acc: 94.58% / test_loss: 0.9584, test_acc: 94.70%
Epoch 40 - loss: 0.9584, acc: 94.65% / test_loss: 0.9570, test_acc: 94.81%
Epoch 41 - loss: 0.9577, acc: 94.71% / test_loss: 0.9637, test_acc: 94.11%
Epoch 42 - loss: 0.9585, acc: 94.68% / test_loss: 0.9560, test_acc: 94.86%
Epoch 43 - loss: 0.9580, acc: 94.72% / test_loss: 0.9621, test_acc: 94.37%
Epoch 44 - loss: 0.9581, acc: 94.65% / test_loss: 0.9549, test_acc: 94.99%
Epoch 45 - loss: 0.9579, acc: 94.74% / test_loss: 0.9558, test_acc: 94.91%
Epoch 46 - loss: 0.9579, acc: 94.73% / test_loss: 0.9584, test_acc: 94.64%
Epoch 47 - loss: 0.9577, acc: 94.72% / test_loss: 0.9567, test_acc: 94.82%
Epoch 48 - loss: 0.9577, acc: 94.72% / test_loss: 0.9557, test_acc: 94.94%
Epoch 49 - loss: 0.9564, acc: 94.84% / test_loss: 0.9557, test_acc: 94.91%
Epoch 50 - loss: 0.9564, acc: 94.84% / test_loss: 0.9548, test_acc: 95.02%
Epoch 51 - loss: 0.9574, acc: 94.77% / test_loss: 0.9568, test_acc: 94.83%
Epoch 52 - loss: 0.9581, acc: 94.68% / test_loss: 0.9570, test_acc: 94.80%
Epoch 53 - loss: 0.9564, acc: 94.86% / test_loss: 0.9523, test_acc: 95.29%
Epoch 54 - loss: 0.9544, acc: 95.05% / test_loss: 0.9532, test_acc: 95.21%
Epoch 55 - loss: 0.9543, acc: 95.05% / test_loss: 0.9566, test_acc: 94.84%
Epoch 56 - loss: 0.9539, acc: 95.10% / test_loss: 0.9544, test_acc: 95.05%
Epoch 57 - loss: 0.9550, acc: 94.99% / test_loss: 0.9523, test_acc: 95.24%
Epoch 58 - loss: 0.9548, acc: 95.03% / test_loss: 0.9528, test_acc: 95.25%
Epoch 59 - loss: 0.9537, acc: 95.13% / test_loss: 0.9517, test_acc: 95.32%
Epoch 60 - loss: 0.9532, acc: 95.16% / test_loss: 0.9518, test_acc: 95.32%
Epoch 61 - loss: 0.9532, acc: 95.18% / test_loss: 0.9514, test_acc: 95.36%
Epoch 62 - loss: 0.9539, acc: 95.11% / test_loss: 0.9523, test_acc: 95.26%
Epoch 63 - loss: 0.9537, acc: 95.11% / test_loss: 0.9540, test_acc: 95.08%
Epoch 64 - loss: 0.9538, acc: 95.14% / test_loss: 0.9546, test_acc: 95.02%
Epoch 65 - loss: 0.9540, acc: 95.10% / test_loss: 0.9528, test_acc: 95.20%
Epoch 66 - loss: 0.9534, acc: 95.15% / test_loss: 0.9541, test_acc: 95.06%
Epoch 67 - loss: 0.9533, acc: 95.18% / test_loss: 0.9518, test_acc: 95.31%
Epoch 68 - loss: 0.9524, acc: 95.25% / test_loss: 0.9530, test_acc: 95.15%
Epoch 69 - loss: 0.9525, acc: 95.24% / test_loss: 0.9528, test_acc: 95.23%
Epoch 70 - loss: 0.9526, acc: 95.21% / test_loss: 0.9553, test_acc: 94.96%
Epoch 71 - loss: 0.9553, acc: 94.95% / test_loss: 0.9568, test_acc: 94.75%
Epoch 72 - loss: 0.9527, acc: 95.24% / test_loss: 0.9510, test_acc: 95.39%
Epoch 73 - loss: 0.9520, acc: 95.30% / test_loss: 0.9510, test_acc: 95.37%
Epoch 74 - loss: 0.9524, acc: 95.25% / test_loss: 0.9531, test_acc: 95.17%
Epoch 75 - loss: 0.9519, acc: 95.30% / test_loss: 0.9585, test_acc: 94.54%
Epoch 76 - loss: 0.9521, acc: 95.27% / test_loss: 0.9526, test_acc: 95.22%
Epoch 77 - loss: 0.9524, acc: 95.24% / test_loss: 0.9534, test_acc: 95.15%
Epoch 78 - loss: 0.9516, acc: 95.33% / test_loss: 0.9513, test_acc: 95.34%
Epoch 79 - loss: 0.9526, acc: 95.22% / test_loss: 0.9513, test_acc: 95.37%
Epoch 80 - loss: 0.9507, acc: 95.39% / test_loss: 0.9502, test_acc: 95.45%
Epoch 81 - loss: 0.9503, acc: 95.45% / test_loss: 0.9509, test_acc: 95.36%
Epoch 82 - loss: 0.9509, acc: 95.36% / test_loss: 0.9501, test_acc: 95.46%
Epoch 83 - loss: 0.9501, acc: 95.49% / test_loss: 0.9508, test_acc: 95.43%
Epoch 84 - loss: 0.9511, acc: 95.39% / test_loss: 0.9514, test_acc: 95.33%
Epoch 85 - loss: 0.9512, acc: 95.36% / test_loss: 0.9498, test_acc: 95.52%
Epoch 86 - loss: 0.9495, acc: 95.54% / test_loss: 0.9504, test_acc: 95.44%
Epoch 87 - loss: 0.9500, acc: 95.49% / test_loss: 0.9502, test_acc: 95.49%
Epoch 88 - loss: 0.9493, acc: 95.55% / test_loss: 0.9497, test_acc: 95.48%
Epoch 89 - loss: 0.9507, acc: 95.42% / test_loss: 0.9525, test_acc: 95.24%
Epoch 90 - loss: 0.9514, acc: 95.36% / test_loss: 0.9505, test_acc: 95.42%
Epoch 91 - loss: 0.9499, acc: 95.51% / test_loss: 0.9544, test_acc: 95.08%
Epoch 92 - loss: 0.9498, acc: 95.52% / test_loss: 0.9518, test_acc: 95.28%
Epoch 93 - loss: 0.9499, acc: 95.49% / test_loss: 0.9573, test_acc: 94.68%
Epoch 94 - loss: 0.9498, acc: 95.51% / test_loss: 0.9514, test_acc: 95.36%
Epoch 95 - loss: 0.9505, acc: 95.43% / test_loss: 0.9506, test_acc: 95.44%
Epoch 96 - loss: 0.9500, acc: 95.47% / test_loss: 0.9522, test_acc: 95.26%
Epoch 97 - loss: 0.9498, acc: 95.51% / test_loss: 0.9540, test_acc: 95.06%
Epoch 98 - loss: 0.9495, acc: 95.55% / test_loss: 0.9493, test_acc: 95.56%
Epoch 99 - loss: 0.9496, acc: 95.54% / test_loss: 0.9522, test_acc: 95.26%
Epoch 100 - loss: 0.9488, acc: 95.61% / test_loss: 0.9498, test_acc: 95.49%
Epoch 101 - loss: 0.9488, acc: 95.63% / test_loss: 0.9492, test_acc: 95.58%
Epoch 102 - loss: 0.9510, acc: 95.39% / test_loss: 0.9494, test_acc: 95.57%
Epoch 103 - loss: 0.9491, acc: 95.58% / test_loss: 0.9520, test_acc: 95.30%
Epoch 104 - loss: 0.9508, acc: 95.39% / test_loss: 0.9509, test_acc: 95.38%
Epoch 105 - loss: 0.9502, acc: 95.48% / test_loss: 0.9506, test_acc: 95.41%
Epoch 106 - loss: 0.9501, acc: 95.49% / test_loss: 0.9523, test_acc: 95.24%
Epoch 107 - loss: 0.9486, acc: 95.63% / test_loss: 0.9497, test_acc: 95.52%
Epoch 108 - loss: 0.9474, acc: 95.72% / test_loss: 0.9496, test_acc: 95.50%
Epoch 109 - loss: 0.9460, acc: 95.88% / test_loss: 0.9491, test_acc: 95.55%
Epoch 110 - loss: 0.9469, acc: 95.79% / test_loss: 0.9470, test_acc: 95.75%
Epoch 111 - loss: 0.9466, acc: 95.82% / test_loss: 0.9483, test_acc: 95.63%
Epoch 112 - loss: 0.9462, acc: 95.87% / test_loss: 0.9481, test_acc: 95.65%
Epoch 113 - loss: 0.9483, acc: 95.65% / test_loss: 0.9479, test_acc: 95.69%
Epoch 114 - loss: 0.9446, acc: 96.03% / test_loss: 0.9471, test_acc: 95.77%
Epoch 115 - loss: 0.9448, acc: 96.01% / test_loss: 0.9478, test_acc: 95.70%
Epoch 116 - loss: 0.9436, acc: 96.10% / test_loss: 0.9458, test_acc: 95.87%
Epoch 117 - loss: 0.9433, acc: 96.14% / test_loss: 0.9437, test_acc: 96.10%
Epoch 118 - loss: 0.9425, acc: 96.22% / test_loss: 0.9411, test_acc: 96.35%
Epoch 119 - loss: 0.9387, acc: 96.59% / test_loss: 0.9387, test_acc: 96.58%
Epoch 120 - loss: 0.9363, acc: 96.83% / test_loss: 0.9375, test_acc: 96.74%
Epoch 121 - loss: 0.9325, acc: 97.24% / test_loss: 0.9380, test_acc: 96.68%
Epoch 122 - loss: 0.9330, acc: 97.20% / test_loss: 0.9379, test_acc: 96.67%
Epoch 123 - loss: 0.9334, acc: 97.13% / test_loss: 0.9397, test_acc: 96.50%
Epoch 124 - loss: 0.9313, acc: 97.37% / test_loss: 0.9396, test_acc: 96.49%
Epoch 125 - loss: 0.9321, acc: 97.22% / test_loss: 0.9355, test_acc: 96.93%
Epoch 126 - loss: 0.9313, acc: 97.33% / test_loss: 0.9365, test_acc: 96.82%
Epoch 127 - loss: 0.9307, acc: 97.42% / test_loss: 0.9346, test_acc: 97.04%
Epoch 128 - loss: 0.9304, acc: 97.42% / test_loss: 0.9346, test_acc: 97.01%
Epoch 129 - loss: 0.9290, acc: 97.61% / test_loss: 0.9360, test_acc: 96.92%
Epoch 130 - loss: 0.9305, acc: 97.43% / test_loss: 0.9399, test_acc: 96.47%
Epoch 131 - loss: 0.9312, acc: 97.31% / test_loss: 0.9340, test_acc: 97.09%
Epoch 132 - loss: 0.9293, acc: 97.55% / test_loss: 0.9339, test_acc: 97.09%
Epoch 133 - loss: 0.9284, acc: 97.63% / test_loss: 0.9345, test_acc: 96.96%
Epoch 134 - loss: 0.9299, acc: 97.47% / test_loss: 0.9340, test_acc: 97.07%
Epoch 135 - loss: 0.9279, acc: 97.73% / test_loss: 0.9351, test_acc: 96.96%
Epoch 136 - loss: 0.9279, acc: 97.73% / test_loss: 0.9358, test_acc: 96.90%
Epoch 137 - loss: 0.9283, acc: 97.67% / test_loss: 0.9466, test_acc: 95.78%
Epoch 138 - loss: 0.9271, acc: 97.80% / test_loss: 0.9371, test_acc: 96.75%
Epoch 139 - loss: 0.9276, acc: 97.71% / test_loss: 0.9325, test_acc: 97.19%
Epoch 140 - loss: 0.9272, acc: 97.76% / test_loss: 0.9370, test_acc: 96.80%
Epoch 141 - loss: 0.9269, acc: 97.82% / test_loss: 0.9355, test_acc: 96.90%
Epoch 142 - loss: 0.9266, acc: 97.80% / test_loss: 0.9308, test_acc: 97.40%
Epoch 143 - loss: 0.9253, acc: 97.97% / test_loss: 0.9317, test_acc: 97.27%
Epoch 144 - loss: 0.9252, acc: 97.99% / test_loss: 0.9296, test_acc: 97.54%
Epoch 145 - loss: 0.9242, acc: 98.07% / test_loss: 0.9299, test_acc: 97.46%
Epoch 146 - loss: 0.9251, acc: 97.98% / test_loss: 0.9315, test_acc: 97.32%
Epoch 147 - loss: 0.9257, acc: 97.93% / test_loss: 0.9308, test_acc: 97.37%
Epoch 148 - loss: 0.9259, acc: 97.87% / test_loss: 0.9301, test_acc: 97.46%
Epoch 149 - loss: 0.9250, acc: 97.98% / test_loss: 0.9308, test_acc: 97.40%
Epoch 150 - loss: 0.9258, acc: 97.94% / test_loss: 0.9307, test_acc: 97.38%
Epoch 151 - loss: 0.9242, acc: 98.06% / test_loss: 0.9306, test_acc: 97.42%
Epoch 152 - loss: 0.9248, acc: 97.98% / test_loss: 0.9310, test_acc: 97.31%
Epoch 153 - loss: 0.9253, acc: 97.92% / test_loss: 0.9296, test_acc: 97.47%
Epoch 154 - loss: 0.9246, acc: 98.07% / test_loss: 0.9299, test_acc: 97.46%
Epoch 155 - loss: 0.9239, acc: 98.10% / test_loss: 0.9308, test_acc: 97.36%
Epoch 156 - loss: 0.9235, acc: 98.16% / test_loss: 0.9289, test_acc: 97.59%
Epoch 157 - loss: 0.9242, acc: 98.05% / test_loss: 0.9292, test_acc: 97.58%
Epoch 158 - loss: 0.9197, acc: 98.52% / test_loss: 0.9259, test_acc: 97.89%
Epoch 159 - loss: 0.9222, acc: 98.22% / test_loss: 0.9290, test_acc: 97.55%
Epoch 160 - loss: 0.9192, acc: 98.57% / test_loss: 0.9251, test_acc: 97.96%
Epoch 161 - loss: 0.9191, acc: 98.54% / test_loss: 0.9253, test_acc: 97.95%
Epoch 162 - loss: 0.9194, acc: 98.53% / test_loss: 0.9265, test_acc: 97.81%
Epoch 163 - loss: 0.9192, acc: 98.56% / test_loss: 0.9277, test_acc: 97.68%
Epoch 164 - loss: 0.9194, acc: 98.57% / test_loss: 0.9266, test_acc: 97.82%
Epoch 165 - loss: 0.9204, acc: 98.45% / test_loss: 0.9287, test_acc: 97.59%
Epoch 166 - loss: 0.9196, acc: 98.51% / test_loss: 0.9286, test_acc: 97.64%
Epoch 167 - loss: 0.9185, acc: 98.64% / test_loss: 0.9258, test_acc: 97.88%
Epoch 168 - loss: 0.9196, acc: 98.57% / test_loss: 0.9250, test_acc: 97.96%
Epoch 169 - loss: 0.9189, acc: 98.59% / test_loss: 0.9241, test_acc: 98.06%
Epoch 170 - loss: 0.9173, acc: 98.74% / test_loss: 0.9283, test_acc: 97.61%
Epoch 171 - loss: 0.9171, acc: 98.78% / test_loss: 0.9246, test_acc: 97.96%
Epoch 172 - loss: 0.9172, acc: 98.77% / test_loss: 0.9264, test_acc: 97.88%
Epoch 173 - loss: 0.9205, acc: 98.46% / test_loss: 0.9279, test_acc: 97.71%
Epoch 174 - loss: 0.9189, acc: 98.57% / test_loss: 0.9259, test_acc: 97.89%
Epoch 175 - loss: 0.9182, acc: 98.67% / test_loss: 0.9245, test_acc: 98.02%
Epoch 176 - loss: 0.9171, acc: 98.78% / test_loss: 0.9263, test_acc: 97.79%
Epoch 177 - loss: 0.9180, acc: 98.69% / test_loss: 0.9260, test_acc: 97.92%
Epoch 178 - loss: 0.9186, acc: 98.62% / test_loss: 0.9286, test_acc: 97.68%
Epoch 179 - loss: 0.9171, acc: 98.79% / test_loss: 0.9238, test_acc: 98.07%
Epoch 180 - loss: 0.9168, acc: 98.80% / test_loss: 0.9253, test_acc: 97.98%
Epoch 181 - loss: 0.9171, acc: 98.78% / test_loss: 0.9231, test_acc: 98.20%
Epoch 182 - loss: 0.9164, acc: 98.85% / test_loss: 0.9225, test_acc: 98.23%
Epoch 183 - loss: 0.9162, acc: 98.86% / test_loss: 0.9240, test_acc: 98.10%
Epoch 184 - loss: 0.9185, acc: 98.63% / test_loss: 0.9247, test_acc: 97.98%
Epoch 185 - loss: 0.9167, acc: 98.84% / test_loss: 0.9228, test_acc: 98.20%
Epoch 186 - loss: 0.9160, acc: 98.91% / test_loss: 0.9240, test_acc: 98.07%
Epoch 187 - loss: 0.9172, acc: 98.75% / test_loss: 0.9266, test_acc: 97.83%
Epoch 188 - loss: 0.9166, acc: 98.81% / test_loss: 0.9236, test_acc: 98.13%
Epoch 189 - loss: 0.9171, acc: 98.77% / test_loss: 0.9246, test_acc: 98.03%
Epoch 190 - loss: 0.9175, acc: 98.74% / test_loss: 0.9255, test_acc: 97.89%
Epoch 191 - loss: 0.9173, acc: 98.74% / test_loss: 0.9271, test_acc: 97.74%
Epoch 192 - loss: 0.9179, acc: 98.69% / test_loss: 0.9305, test_acc: 97.44%
Epoch 193 - loss: 0.9181, acc: 98.68% / test_loss: 0.9245, test_acc: 98.01%
Epoch 194 - loss: 0.9163, acc: 98.85% / test_loss: 0.9254, test_acc: 97.92%
Epoch 195 - loss: 0.9156, acc: 98.93% / test_loss: 0.9230, test_acc: 98.20%
Epoch 196 - loss: 0.9159, acc: 98.91% / test_loss: 0.9236, test_acc: 98.13%
Epoch 197 - loss: 0.9158, acc: 98.92% / test_loss: 0.9233, test_acc: 98.19%
Epoch 198 - loss: 0.9161, acc: 98.89% / test_loss: 0.9282, test_acc: 97.67%
Epoch 199 - loss: 0.9179, acc: 98.70% / test_loss: 0.9265, test_acc: 97.82%
Epoch 200 - loss: 0.9165, acc: 98.85% / test_loss: 0.9245, test_acc: 98.01%
Epoch 201 - loss: 0.9167, acc: 98.78% / test_loss: 0.9250, test_acc: 97.96%
Epoch 202 - loss: 0.9167, acc: 98.84% / test_loss: 0.9258, test_acc: 97.85%
Epoch 203 - loss: 0.9160, acc: 98.88% / test_loss: 0.9232, test_acc: 98.14%
Epoch 204 - loss: 0.9158, acc: 98.91% / test_loss: 0.9241, test_acc: 98.07%
Epoch 205 - loss: 0.9168, acc: 98.80% / test_loss: 0.9247, test_acc: 97.98%
Epoch 206 - loss: 0.9189, acc: 98.59% / test_loss: 0.9253, test_acc: 97.95%
Epoch 207 - loss: 0.9166, acc: 98.83% / test_loss: 0.9245, test_acc: 98.03%
Epoch 208 - loss: 0.9158, acc: 98.91% / test_loss: 0.9236, test_acc: 98.10%
Epoch 209 - loss: 0.9161, acc: 98.88% / test_loss: 0.9239, test_acc: 98.09%
Epoch 210 - loss: 0.9156, acc: 98.94% / test_loss: 0.9239, test_acc: 98.07%
Epoch 211 - loss: 0.9170, acc: 98.77% / test_loss: 0.9281, test_acc: 97.60%
Epoch 212 - loss: 0.9151, acc: 98.97% / test_loss: 0.9224, test_acc: 98.26%
Epoch 213 - loss: 0.9161, acc: 98.86% / test_loss: 0.9246, test_acc: 98.00%
Epoch 214 - loss: 0.9169, acc: 98.78% / test_loss: 0.9298, test_acc: 97.49%
Epoch 215 - loss: 0.9166, acc: 98.80% / test_loss: 0.9269, test_acc: 97.80%
Epoch 216 - loss: 0.9159, acc: 98.91% / test_loss: 0.9230, test_acc: 98.19%
Epoch 217 - loss: 0.9164, acc: 98.88% / test_loss: 0.9231, test_acc: 98.18%
Epoch 218 - loss: 0.9167, acc: 98.81% / test_loss: 0.9251, test_acc: 98.00%
Epoch 219 - loss: 0.9167, acc: 98.81% / test_loss: 0.9243, test_acc: 98.05%
Epoch 220 - loss: 0.9157, acc: 98.94% / test_loss: 0.9258, test_acc: 97.91%
Epoch 221 - loss: 0.9164, acc: 98.86% / test_loss: 0.9243, test_acc: 98.02%
Epoch 222 - loss: 0.9160, acc: 98.89% / test_loss: 0.9246, test_acc: 98.04%
Epoch 223 - loss: 0.9163, acc: 98.84% / test_loss: 0.9252, test_acc: 97.92%
Epoch 224 - loss: 0.9156, acc: 98.94% / test_loss: 0.9301, test_acc: 97.46%
Epoch 225 - loss: 0.9162, acc: 98.88% / test_loss: 0.9225, test_acc: 98.23%
Epoch 226 - loss: 0.9148, acc: 99.00% / test_loss: 0.9247, test_acc: 98.01%
Epoch 227 - loss: 0.9163, acc: 98.87% / test_loss: 0.9236, test_acc: 98.11%
Epoch 228 - loss: 0.9156, acc: 98.94% / test_loss: 0.9243, test_acc: 98.04%
Epoch 229 - loss: 0.9162, acc: 98.88% / test_loss: 0.9259, test_acc: 97.87%
Epoch 230 - loss: 0.9176, acc: 98.71% / test_loss: 0.9233, test_acc: 98.12%
Epoch 231 - loss: 0.9155, acc: 98.91% / test_loss: 0.9235, test_acc: 98.14%
Epoch 232 - loss: 0.9162, acc: 98.88% / test_loss: 0.9230, test_acc: 98.18%
Epoch 233 - loss: 0.9162, acc: 98.86% / test_loss: 0.9237, test_acc: 98.10%
Epoch 234 - loss: 0.9167, acc: 98.81% / test_loss: 0.9230, test_acc: 98.18%
Epoch 235 - loss: 0.9172, acc: 98.76% / test_loss: 0.9270, test_acc: 97.79%
Epoch 236 - loss: 0.9167, acc: 98.80% / test_loss: 0.9216, test_acc: 98.33%
Epoch 237 - loss: 0.9154, acc: 98.96% / test_loss: 0.9222, test_acc: 98.26%
Epoch 238 - loss: 0.9162, acc: 98.88% / test_loss: 0.9223, test_acc: 98.26%
Epoch 239 - loss: 0.9164, acc: 98.84% / test_loss: 0.9250, test_acc: 97.95%
Epoch 240 - loss: 0.9152, acc: 98.97% / test_loss: 0.9228, test_acc: 98.22%
Epoch 241 - loss: 0.9144, acc: 99.05% / test_loss: 0.9253, test_acc: 97.96%
Epoch 242 - loss: 0.9159, acc: 98.88% / test_loss: 0.9229, test_acc: 98.19%
Epoch 243 - loss: 0.9156, acc: 98.94% / test_loss: 0.9231, test_acc: 98.17%
Epoch 244 - loss: 0.9169, acc: 98.80% / test_loss: 0.9229, test_acc: 98.18%
Epoch 245 - loss: 0.9155, acc: 98.93% / test_loss: 0.9234, test_acc: 98.14%
Epoch 246 - loss: 0.9164, acc: 98.84% / test_loss: 0.9223, test_acc: 98.24%
Epoch 247 - loss: 0.9147, acc: 99.02% / test_loss: 0.9225, test_acc: 98.18%
Epoch 248 - loss: 0.9164, acc: 98.81% / test_loss: 0.9263, test_acc: 97.85%
Epoch 249 - loss: 0.9159, acc: 98.89% / test_loss: 0.9223, test_acc: 98.26%
Epoch 250 - loss: 0.9148, acc: 99.00% / test_loss: 0.9225, test_acc: 98.24%
Epoch 251 - loss: 0.9152, acc: 98.97% / test_loss: 0.9225, test_acc: 98.20%
Epoch 252 - loss: 0.9153, acc: 98.95% / test_loss: 0.9239, test_acc: 98.07%
Epoch 253 - loss: 0.9139, acc: 99.10% / test_loss: 0.9221, test_acc: 98.26%
Epoch 254 - loss: 0.9157, acc: 98.92% / test_loss: 0.9227, test_acc: 98.18%
Epoch 255 - loss: 0.9149, acc: 99.01% / test_loss: 0.9214, test_acc: 98.32%
Epoch 256 - loss: 0.9145, acc: 99.02% / test_loss: 0.9215, test_acc: 98.35%
Epoch 257 - loss: 0.9159, acc: 98.88% / test_loss: 0.9220, test_acc: 98.28%
Epoch 258 - loss: 0.9152, acc: 98.96% / test_loss: 0.9242, test_acc: 98.01%
Epoch 259 - loss: 0.9163, acc: 98.85% / test_loss: 0.9226, test_acc: 98.23%
Epoch 260 - loss: 0.9155, acc: 98.94% / test_loss: 0.9224, test_acc: 98.23%
Epoch 261 - loss: 0.9138, acc: 99.10% / test_loss: 0.9211, test_acc: 98.38%
Epoch 262 - loss: 0.9147, acc: 99.03% / test_loss: 0.9235, test_acc: 98.11%
Epoch 263 - loss: 0.9158, acc: 98.91% / test_loss: 0.9234, test_acc: 98.16%
Epoch 264 - loss: 0.9157, acc: 98.91% / test_loss: 0.9266, test_acc: 97.81%
Epoch 265 - loss: 0.9154, acc: 98.95% / test_loss: 0.9216, test_acc: 98.32%
Epoch 266 - loss: 0.9150, acc: 98.98% / test_loss: 0.9285, test_acc: 97.64%
Epoch 267 - loss: 0.9158, acc: 98.92% / test_loss: 0.9245, test_acc: 98.01%
Epoch 268 - loss: 0.9159, acc: 98.90% / test_loss: 0.9221, test_acc: 98.30%
Epoch 269 - loss: 0.9151, acc: 98.96% / test_loss: 0.9218, test_acc: 98.30%
Epoch 270 - loss: 0.9146, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 271 - loss: 0.9136, acc: 99.12% / test_loss: 0.9215, test_acc: 98.31%
Epoch 272 - loss: 0.9139, acc: 99.10% / test_loss: 0.9213, test_acc: 98.35%
Epoch 273 - loss: 0.9159, acc: 98.91% / test_loss: 0.9228, test_acc: 98.20%
Epoch 274 - loss: 0.9165, acc: 98.84% / test_loss: 0.9282, test_acc: 97.61%
Epoch 275 - loss: 0.9164, acc: 98.83% / test_loss: 0.9248, test_acc: 97.98%
Epoch 276 - loss: 0.9164, acc: 98.84% / test_loss: 0.9233, test_acc: 98.14%
Epoch 277 - loss: 0.9148, acc: 99.00% / test_loss: 0.9230, test_acc: 98.14%
Epoch 278 - loss: 0.9143, acc: 99.05% / test_loss: 0.9225, test_acc: 98.23%
Epoch 279 - loss: 0.9148, acc: 99.00% / test_loss: 0.9226, test_acc: 98.23%
Epoch 280 - loss: 0.9140, acc: 99.07% / test_loss: 0.9222, test_acc: 98.27%
Epoch 281 - loss: 0.9144, acc: 99.06% / test_loss: 0.9239, test_acc: 98.08%
Epoch 282 - loss: 0.9153, acc: 98.96% / test_loss: 0.9216, test_acc: 98.32%
Epoch 283 - loss: 0.9139, acc: 99.09% / test_loss: 0.9230, test_acc: 98.17%
Epoch 284 - loss: 0.9149, acc: 99.00% / test_loss: 0.9217, test_acc: 98.29%
Epoch 285 - loss: 0.9157, acc: 98.89% / test_loss: 0.9222, test_acc: 98.27%
Epoch 286 - loss: 0.9143, acc: 99.06% / test_loss: 0.9214, test_acc: 98.35%
Epoch 287 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.23%
Epoch 288 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.29%
Epoch 289 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.32%
Epoch 290 - loss: 0.9141, acc: 99.09% / test_loss: 0.9213, test_acc: 98.33%
Epoch 291 - loss: 0.9150, acc: 98.99% / test_loss: 0.9235, test_acc: 98.14%
Epoch 292 - loss: 0.9158, acc: 98.90% / test_loss: 0.9233, test_acc: 98.14%
Epoch 293 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.28%
Epoch 294 - loss: 0.9161, acc: 98.87% / test_loss: 0.9253, test_acc: 97.94%
Epoch 295 - loss: 0.9163, acc: 98.86% / test_loss: 0.9237, test_acc: 98.11%
Epoch 296 - loss: 0.9147, acc: 99.01% / test_loss: 0.9237, test_acc: 98.10%
Epoch 297 - loss: 0.9145, acc: 99.04% / test_loss: 0.9229, test_acc: 98.16%
Epoch 298 - loss: 0.9159, acc: 98.91% / test_loss: 0.9236, test_acc: 98.07%
Epoch 299 - loss: 0.9144, acc: 99.06% / test_loss: 0.9223, test_acc: 98.24%
Epoch 300 - loss: 0.9156, acc: 98.91% / test_loss: 0.9235, test_acc: 98.13%
Epoch 301 - loss: 0.9141, acc: 99.07% / test_loss: 0.9219, test_acc: 98.29%
Epoch 302 - loss: 0.9144, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 303 - loss: 0.9146, acc: 99.00% / test_loss: 0.9231, test_acc: 98.18%
Epoch 304 - loss: 0.9138, acc: 99.10% / test_loss: 0.9221, test_acc: 98.26%
Epoch 305 - loss: 0.9152, acc: 98.95% / test_loss: 0.9222, test_acc: 98.26%
Epoch 306 - loss: 0.9141, acc: 99.06% / test_loss: 0.9201, test_acc: 98.46%
Epoch 307 - loss: 0.9143, acc: 99.06% / test_loss: 0.9218, test_acc: 98.29%
Epoch 308 - loss: 0.9138, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 309 - loss: 0.9144, acc: 99.03% / test_loss: 0.9247, test_acc: 98.00%
Epoch 310 - loss: 0.9167, acc: 98.80% / test_loss: 0.9217, test_acc: 98.31%
Epoch 311 - loss: 0.9147, acc: 99.02% / test_loss: 0.9239, test_acc: 98.07%
Epoch 312 - loss: 0.9138, acc: 99.11% / test_loss: 0.9235, test_acc: 98.11%
Epoch 313 - loss: 0.9149, acc: 99.01% / test_loss: 0.9226, test_acc: 98.20%
Epoch 314 - loss: 0.9147, acc: 99.03% / test_loss: 0.9234, test_acc: 98.14%
Epoch 315 - loss: 0.9147, acc: 99.01% / test_loss: 0.9241, test_acc: 98.07%
Epoch 316 - loss: 0.9157, acc: 98.92% / test_loss: 0.9240, test_acc: 98.07%
Epoch 317 - loss: 0.9146, acc: 99.03% / test_loss: 0.9225, test_acc: 98.23%
Epoch 318 - loss: 0.9132, acc: 99.16% / test_loss: 0.9219, test_acc: 98.28%
Epoch 319 - loss: 0.9146, acc: 99.02% / test_loss: 0.9222, test_acc: 98.26%
Epoch 320 - loss: 0.9150, acc: 98.98% / test_loss: 0.9234, test_acc: 98.12%
Epoch 321 - loss: 0.9148, acc: 99.01% / test_loss: 0.9218, test_acc: 98.30%
Epoch 322 - loss: 0.9134, acc: 99.15% / test_loss: 0.9248, test_acc: 98.00%
Epoch 323 - loss: 0.9136, acc: 99.12% / test_loss: 0.9209, test_acc: 98.39%
Epoch 324 - loss: 0.9147, acc: 99.00% / test_loss: 0.9218, test_acc: 98.28%
Epoch 325 - loss: 0.9139, acc: 99.10% / test_loss: 0.9220, test_acc: 98.25%
Epoch 326 - loss: 0.9139, acc: 99.09% / test_loss: 0.9229, test_acc: 98.20%
Epoch 327 - loss: 0.9149, acc: 99.00% / test_loss: 0.9221, test_acc: 98.27%
Epoch 328 - loss: 0.9141, acc: 99.07% / test_loss: 0.9221, test_acc: 98.26%
Epoch 329 - loss: 0.9145, acc: 99.03% / test_loss: 0.9259, test_acc: 97.88%
Epoch 330 - loss: 0.9134, acc: 99.14% / test_loss: 0.9226, test_acc: 98.21%
Epoch 331 - loss: 0.9145, acc: 99.04% / test_loss: 0.9223, test_acc: 98.25%
Epoch 332 - loss: 0.9137, acc: 99.10% / test_loss: 0.9211, test_acc: 98.35%
Epoch 333 - loss: 0.9141, acc: 99.09% / test_loss: 0.9215, test_acc: 98.32%
Epoch 334 - loss: 0.9164, acc: 98.81% / test_loss: 0.9222, test_acc: 98.23%
Epoch 335 - loss: 0.9154, acc: 98.95% / test_loss: 0.9216, test_acc: 98.29%
Epoch 336 - loss: 0.9148, acc: 99.00% / test_loss: 0.9222, test_acc: 98.26%
Epoch 337 - loss: 0.9132, acc: 99.17% / test_loss: 0.9220, test_acc: 98.28%
Epoch 338 - loss: 0.9141, acc: 99.06% / test_loss: 0.9215, test_acc: 98.32%
Epoch 339 - loss: 0.9133, acc: 99.16% / test_loss: 0.9245, test_acc: 98.03%
Epoch 340 - loss: 0.9144, acc: 99.05% / test_loss: 0.9240, test_acc: 98.04%
Epoch 341 - loss: 0.9150, acc: 98.99% / test_loss: 0.9251, test_acc: 97.93%
Epoch 342 - loss: 0.9144, acc: 99.04% / test_loss: 0.9223, test_acc: 98.23%
Epoch 343 - loss: 0.9136, acc: 99.12% / test_loss: 0.9223, test_acc: 98.26%
Epoch 344 - loss: 0.9151, acc: 98.99% / test_loss: 0.9237, test_acc: 98.13%
Epoch 345 - loss: 0.9137, acc: 99.11% / test_loss: 0.9213, test_acc: 98.32%
Epoch 346 - loss: 0.9129, acc: 99.19% / test_loss: 0.9207, test_acc: 98.37%
Epoch 347 - loss: 0.9147, acc: 99.02% / test_loss: 0.9220, test_acc: 98.28%
Epoch 348 - loss: 0.9134, acc: 99.14% / test_loss: 0.9216, test_acc: 98.32%
Epoch 349 - loss: 0.9137, acc: 99.12% / test_loss: 0.9217, test_acc: 98.32%
Epoch 350 - loss: 0.9142, acc: 99.06% / test_loss: 0.9262, test_acc: 97.86%
Epoch 351 - loss: 0.9167, acc: 98.81% / test_loss: 0.9231, test_acc: 98.18%
Epoch 352 - loss: 0.9142, acc: 99.06% / test_loss: 0.9212, test_acc: 98.36%
Epoch 353 - loss: 0.9137, acc: 99.11% / test_loss: 0.9231, test_acc: 98.16%
Epoch 354 - loss: 0.9142, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Epoch 355 - loss: 0.9140, acc: 99.09% / test_loss: 0.9238, test_acc: 98.10%
Epoch 356 - loss: 0.9142, acc: 99.06% / test_loss: 0.9250, test_acc: 97.93%
Epoch 357 - loss: 0.9147, acc: 99.01% / test_loss: 0.9234, test_acc: 98.13%
Epoch 358 - loss: 0.9135, acc: 99.12% / test_loss: 0.9224, test_acc: 98.23%
Epoch 359 - loss: 0.9143, acc: 99.06% / test_loss: 0.9251, test_acc: 97.96%
Epoch 360 - loss: 0.9142, acc: 99.06% / test_loss: 0.9224, test_acc: 98.22%
Epoch 361 - loss: 0.9147, acc: 99.00% / test_loss: 0.9232, test_acc: 98.18%
Epoch 362 - loss: 0.9148, acc: 99.00% / test_loss: 0.9219, test_acc: 98.27%
Epoch 363 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.24%
Epoch 364 - loss: 0.9132, acc: 99.16% / test_loss: 0.9218, test_acc: 98.29%
Epoch 365 - loss: 0.9153, acc: 98.94% / test_loss: 0.9250, test_acc: 97.98%
Epoch 366 - loss: 0.9150, acc: 98.99% / test_loss: 0.9231, test_acc: 98.14%
Epoch 367 - loss: 0.9137, acc: 99.11% / test_loss: 0.9211, test_acc: 98.38%
Epoch 368 - loss: 0.9133, acc: 99.15% / test_loss: 0.9209, test_acc: 98.39%
Epoch 369 - loss: 0.9145, acc: 99.03% / test_loss: 0.9217, test_acc: 98.31%
Epoch 370 - loss: 0.9141, acc: 99.08% / test_loss: 0.9229, test_acc: 98.20%
Epoch 371 - loss: 0.9143, acc: 99.06% / test_loss: 0.9236, test_acc: 98.14%
Epoch 372 - loss: 0.9175, acc: 98.74% / test_loss: 0.9325, test_acc: 97.18%
Epoch 373 - loss: 0.9159, acc: 98.88% / test_loss: 0.9204, test_acc: 98.44%
Epoch 374 - loss: 0.9135, acc: 99.13% / test_loss: 0.9213, test_acc: 98.32%
Epoch 375 - loss: 0.9147, acc: 99.01% / test_loss: 0.9211, test_acc: 98.38%
Epoch 376 - loss: 0.9148, acc: 98.98% / test_loss: 0.9220, test_acc: 98.25%
Epoch 377 - loss: 0.9144, acc: 99.03% / test_loss: 0.9236, test_acc: 98.11%
Epoch 378 - loss: 0.9139, acc: 99.09% / test_loss: 0.9219, test_acc: 98.28%
Epoch 379 - loss: 0.9143, acc: 99.06% / test_loss: 0.9216, test_acc: 98.31%
Epoch 380 - loss: 0.9147, acc: 99.00% / test_loss: 0.9235, test_acc: 98.10%
Epoch 381 - loss: 0.9141, acc: 99.07% / test_loss: 0.9210, test_acc: 98.35%
Epoch 382 - loss: 0.9140, acc: 99.09% / test_loss: 0.9252, test_acc: 97.95%
Epoch 383 - loss: 0.9143, acc: 99.05% / test_loss: 0.9216, test_acc: 98.31%
Epoch 384 - loss: 0.9136, acc: 99.12% / test_loss: 0.9207, test_acc: 98.41%
Epoch 385 - loss: 0.9142, acc: 99.07% / test_loss: 0.9261, test_acc: 97.88%
Epoch 386 - loss: 0.9154, acc: 98.93% / test_loss: 0.9227, test_acc: 98.22%
Epoch 387 - loss: 0.9133, acc: 99.15% / test_loss: 0.9216, test_acc: 98.32%
Epoch 388 - loss: 0.9128, acc: 99.20% / test_loss: 0.9209, test_acc: 98.39%
Epoch 389 - loss: 0.9125, acc: 99.23% / test_loss: 0.9209, test_acc: 98.38%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9260, test_acc: 97.89%
Epoch 391 - loss: 0.9156, acc: 98.92% / test_loss: 0.9239, test_acc: 98.07%
Epoch 392 - loss: 0.9139, acc: 99.11% / test_loss: 0.9230, test_acc: 98.16%
Epoch 393 - loss: 0.9142, acc: 99.06% / test_loss: 0.9209, test_acc: 98.40%
Epoch 394 - loss: 0.9129, acc: 99.20% / test_loss: 0.9212, test_acc: 98.38%
Epoch 395 - loss: 0.9133, acc: 99.15% / test_loss: 0.9206, test_acc: 98.41%
Epoch 396 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.15%
Epoch 397 - loss: 0.9148, acc: 99.00% / test_loss: 0.9228, test_acc: 98.21%
Epoch 398 - loss: 0.9149, acc: 98.99% / test_loss: 0.9263, test_acc: 97.81%
Epoch 399 - loss: 0.9147, acc: 99.00% / test_loss: 0.9226, test_acc: 98.23%
Epoch 400 - loss: 0.9143, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Best test accuracy 98.46% in epoch 306.
----------------------------------------------------------------------------------------------------
Run 2
Epoch 1 - loss: 1.3611, acc: 55.37% / test_loss: 1.2540, test_acc: 64.42%
Epoch 2 - loss: 1.1441, acc: 77.27% / test_loss: 1.0729, test_acc: 84.79%
Epoch 3 - loss: 1.0593, acc: 85.32% / test_loss: 1.0374, test_acc: 87.67%
Epoch 4 - loss: 1.0470, acc: 86.33% / test_loss: 1.0413, test_acc: 86.73%
Epoch 5 - loss: 1.0408, acc: 86.60% / test_loss: 1.0293, test_acc: 87.67%
Epoch 6 - loss: 1.0347, acc: 87.12% / test_loss: 1.0218, test_acc: 88.44%
Epoch 7 - loss: 1.0315, acc: 87.47% / test_loss: 1.0236, test_acc: 88.23%
Epoch 8 - loss: 1.0282, acc: 87.70% / test_loss: 1.0220, test_acc: 88.38%
Epoch 9 - loss: 1.0235, acc: 88.15% / test_loss: 1.0176, test_acc: 88.81%
Epoch 10 - loss: 1.0241, acc: 88.09% / test_loss: 1.0149, test_acc: 89.01%
Epoch 11 - loss: 1.0232, acc: 88.21% / test_loss: 1.0114, test_acc: 89.25%
Epoch 12 - loss: 1.0214, acc: 88.30% / test_loss: 1.0114, test_acc: 89.28%
Epoch 13 - loss: 1.0200, acc: 88.46% / test_loss: 1.0133, test_acc: 89.10%
Epoch 14 - loss: 1.0199, acc: 88.39% / test_loss: 1.0131, test_acc: 89.16%
Epoch 15 - loss: 1.0162, acc: 88.79% / test_loss: 1.0111, test_acc: 89.27%
Epoch 16 - loss: 1.0153, acc: 88.86% / test_loss: 1.0111, test_acc: 89.26%
Epoch 17 - loss: 1.0158, acc: 88.81% / test_loss: 1.0086, test_acc: 89.47%
Epoch 18 - loss: 1.0150, acc: 88.86% / test_loss: 1.0074, test_acc: 89.65%
Epoch 19 - loss: 1.0134, acc: 89.05% / test_loss: 1.0072, test_acc: 89.60%
Epoch 20 - loss: 1.0113, acc: 89.24% / test_loss: 1.0088, test_acc: 89.50%
Epoch 21 - loss: 1.0132, acc: 89.04% / test_loss: 1.0054, test_acc: 89.77%
Epoch 22 - loss: 1.0097, acc: 89.40% / test_loss: 1.0032, test_acc: 90.06%
Epoch 23 - loss: 1.0080, acc: 89.57% / test_loss: 1.0013, test_acc: 90.19%
Epoch 24 - loss: 1.0084, acc: 89.53% / test_loss: 1.0032, test_acc: 90.09%
Epoch 25 - loss: 1.0075, acc: 89.60% / test_loss: 1.0028, test_acc: 90.03%
Epoch 26 - loss: 1.0072, acc: 89.60% / test_loss: 1.0014, test_acc: 90.15%
Epoch 27 - loss: 1.0057, acc: 89.72% / test_loss: 1.0017, test_acc: 90.09%
Epoch 28 - loss: 1.0052, acc: 89.77% / test_loss: 1.0063, test_acc: 89.76%
Epoch 29 - loss: 1.0063, acc: 89.69% / test_loss: 0.9996, test_acc: 90.34%
Epoch 30 - loss: 1.0052, acc: 89.78% / test_loss: 1.0029, test_acc: 90.00%
Epoch 31 - loss: 1.0050, acc: 89.82% / test_loss: 1.0021, test_acc: 90.15%
Epoch 32 - loss: 1.0058, acc: 89.76% / test_loss: 0.9996, test_acc: 90.36%
Epoch 33 - loss: 1.0034, acc: 89.92% / test_loss: 1.0028, test_acc: 90.09%
Epoch 34 - loss: 1.0054, acc: 89.77% / test_loss: 1.0117, test_acc: 89.10%
Epoch 35 - loss: 1.0041, acc: 89.89% / test_loss: 0.9986, test_acc: 90.41%
Epoch 36 - loss: 1.0018, acc: 90.09% / test_loss: 0.9978, test_acc: 90.48%
Epoch 37 - loss: 1.0022, acc: 90.08% / test_loss: 0.9991, test_acc: 90.34%
Epoch 38 - loss: 1.0026, acc: 90.03% / test_loss: 1.0001, test_acc: 90.28%
Epoch 39 - loss: 1.0017, acc: 90.09% / test_loss: 0.9979, test_acc: 90.46%
Epoch 40 - loss: 1.0019, acc: 90.06% / test_loss: 0.9988, test_acc: 90.43%
Epoch 41 - loss: 1.0014, acc: 90.12% / test_loss: 0.9975, test_acc: 90.49%
Epoch 42 - loss: 1.0008, acc: 90.18% / test_loss: 0.9980, test_acc: 90.46%
Epoch 43 - loss: 1.0008, acc: 90.18% / test_loss: 0.9979, test_acc: 90.55%
Epoch 44 - loss: 1.0009, acc: 90.15% / test_loss: 0.9966, test_acc: 90.57%
Epoch 45 - loss: 0.9994, acc: 90.32% / test_loss: 0.9958, test_acc: 90.66%
Epoch 46 - loss: 0.9985, acc: 90.40% / test_loss: 0.9950, test_acc: 90.71%
Epoch 47 - loss: 0.9977, acc: 90.48% / test_loss: 0.9964, test_acc: 90.59%
Epoch 48 - loss: 0.9982, acc: 90.47% / test_loss: 0.9979, test_acc: 90.48%
Epoch 49 - loss: 0.9980, acc: 90.43% / test_loss: 0.9965, test_acc: 90.55%
Epoch 50 - loss: 0.9994, acc: 90.28% / test_loss: 0.9965, test_acc: 90.64%
Epoch 51 - loss: 0.9984, acc: 90.38% / test_loss: 0.9957, test_acc: 90.74%
Epoch 52 - loss: 0.9962, acc: 90.63% / test_loss: 0.9997, test_acc: 90.27%
Epoch 53 - loss: 0.9970, acc: 90.59% / test_loss: 0.9961, test_acc: 90.63%
Epoch 54 - loss: 0.9942, acc: 90.80% / test_loss: 0.9944, test_acc: 90.85%
Epoch 55 - loss: 0.9917, acc: 91.05% / test_loss: 1.0024, test_acc: 89.94%
Epoch 56 - loss: 0.9886, acc: 91.41% / test_loss: 0.9862, test_acc: 91.59%
Epoch 57 - loss: 0.9871, acc: 91.51% / test_loss: 0.9856, test_acc: 91.72%
Epoch 58 - loss: 0.9832, acc: 91.88% / test_loss: 0.9827, test_acc: 91.97%
Epoch 59 - loss: 0.9826, acc: 92.00% / test_loss: 0.9856, test_acc: 91.79%
Epoch 60 - loss: 0.9820, acc: 92.00% / test_loss: 0.9844, test_acc: 91.94%
Epoch 61 - loss: 0.9807, acc: 92.18% / test_loss: 0.9832, test_acc: 91.89%
Epoch 62 - loss: 0.9794, acc: 92.26% / test_loss: 0.9811, test_acc: 92.10%
Epoch 63 - loss: 0.9800, acc: 92.21% / test_loss: 0.9831, test_acc: 91.91%
Epoch 64 - loss: 0.9790, acc: 92.34% / test_loss: 0.9805, test_acc: 92.20%
Epoch 65 - loss: 0.9781, acc: 92.43% / test_loss: 0.9790, test_acc: 92.28%
Epoch 66 - loss: 0.9786, acc: 92.35% / test_loss: 0.9814, test_acc: 92.04%
Epoch 67 - loss: 0.9764, acc: 92.56% / test_loss: 0.9797, test_acc: 92.19%
Epoch 68 - loss: 0.9775, acc: 92.47% / test_loss: 0.9784, test_acc: 92.29%
Epoch 69 - loss: 0.9775, acc: 92.46% / test_loss: 0.9818, test_acc: 91.99%
Epoch 70 - loss: 0.9781, acc: 92.43% / test_loss: 0.9790, test_acc: 92.28%
Epoch 71 - loss: 0.9767, acc: 92.50% / test_loss: 0.9777, test_acc: 92.40%
Epoch 72 - loss: 0.9764, acc: 92.56% / test_loss: 0.9793, test_acc: 92.25%
Epoch 73 - loss: 0.9777, acc: 92.44% / test_loss: 0.9785, test_acc: 92.35%
Epoch 74 - loss: 0.9763, acc: 92.53% / test_loss: 0.9786, test_acc: 92.37%
Epoch 75 - loss: 0.9757, acc: 92.61% / test_loss: 0.9776, test_acc: 92.43%
Epoch 76 - loss: 0.9752, acc: 92.63% / test_loss: 0.9800, test_acc: 92.30%
Epoch 77 - loss: 0.9755, acc: 92.63% / test_loss: 0.9799, test_acc: 92.21%
Epoch 78 - loss: 0.9749, acc: 92.69% / test_loss: 0.9768, test_acc: 92.53%
Epoch 79 - loss: 0.9750, acc: 92.69% / test_loss: 0.9770, test_acc: 92.49%
Epoch 80 - loss: 0.9755, acc: 92.62% / test_loss: 0.9764, test_acc: 92.57%
Epoch 81 - loss: 0.9750, acc: 92.70% / test_loss: 0.9822, test_acc: 91.95%
Epoch 82 - loss: 0.9749, acc: 92.68% / test_loss: 0.9779, test_acc: 92.41%
Epoch 83 - loss: 0.9744, acc: 92.74% / test_loss: 0.9759, test_acc: 92.56%
Epoch 84 - loss: 0.9740, acc: 92.77% / test_loss: 0.9796, test_acc: 92.23%
Epoch 85 - loss: 0.9736, acc: 92.81% / test_loss: 0.9780, test_acc: 92.36%
Epoch 86 - loss: 0.9767, acc: 92.50% / test_loss: 0.9791, test_acc: 92.36%
Epoch 87 - loss: 0.9745, acc: 92.71% / test_loss: 0.9788, test_acc: 92.37%
Epoch 88 - loss: 0.9736, acc: 92.79% / test_loss: 0.9769, test_acc: 92.49%
Epoch 89 - loss: 0.9744, acc: 92.74% / test_loss: 0.9762, test_acc: 92.56%
Epoch 90 - loss: 0.9738, acc: 92.81% / test_loss: 0.9775, test_acc: 92.46%
Epoch 91 - loss: 0.9738, acc: 92.79% / test_loss: 0.9868, test_acc: 91.50%
Epoch 92 - loss: 0.9738, acc: 92.82% / test_loss: 0.9761, test_acc: 92.55%
Epoch 93 - loss: 0.9725, acc: 92.91% / test_loss: 0.9760, test_acc: 92.53%
Epoch 94 - loss: 0.9739, acc: 92.79% / test_loss: 0.9772, test_acc: 92.44%
Epoch 95 - loss: 0.9729, acc: 92.87% / test_loss: 0.9815, test_acc: 92.00%
Epoch 96 - loss: 0.9728, acc: 92.87% / test_loss: 0.9756, test_acc: 92.61%
Epoch 97 - loss: 0.9732, acc: 92.85% / test_loss: 0.9770, test_acc: 92.51%
Epoch 98 - loss: 0.9719, acc: 92.95% / test_loss: 0.9748, test_acc: 92.65%
Epoch 99 - loss: 0.9726, acc: 92.93% / test_loss: 0.9767, test_acc: 92.48%
Epoch 100 - loss: 0.9717, acc: 92.99% / test_loss: 0.9812, test_acc: 92.00%
Epoch 101 - loss: 0.9737, acc: 92.78% / test_loss: 0.9761, test_acc: 92.57%
Epoch 102 - loss: 0.9734, acc: 92.80% / test_loss: 0.9773, test_acc: 92.46%
Epoch 103 - loss: 0.9738, acc: 92.77% / test_loss: 0.9767, test_acc: 92.46%
Epoch 104 - loss: 0.9733, acc: 92.80% / test_loss: 0.9767, test_acc: 92.48%
Epoch 105 - loss: 0.9737, acc: 92.81% / test_loss: 0.9775, test_acc: 92.38%
Epoch 106 - loss: 0.9724, acc: 92.89% / test_loss: 0.9768, test_acc: 92.46%
Epoch 107 - loss: 0.9729, acc: 92.87% / test_loss: 0.9768, test_acc: 92.48%
Epoch 108 - loss: 0.9715, acc: 92.98% / test_loss: 0.9756, test_acc: 92.58%
Epoch 109 - loss: 0.9717, acc: 92.96% / test_loss: 0.9748, test_acc: 92.65%
Epoch 110 - loss: 0.9718, acc: 92.95% / test_loss: 0.9772, test_acc: 92.45%
Epoch 111 - loss: 0.9729, acc: 92.87% / test_loss: 0.9764, test_acc: 92.51%
Epoch 112 - loss: 0.9746, acc: 92.72% / test_loss: 0.9753, test_acc: 92.59%
Epoch 113 - loss: 0.9713, acc: 92.99% / test_loss: 0.9775, test_acc: 92.42%
Epoch 114 - loss: 0.9716, acc: 92.98% / test_loss: 0.9747, test_acc: 92.68%
Epoch 115 - loss: 0.9733, acc: 92.82% / test_loss: 0.9777, test_acc: 92.38%
Epoch 116 - loss: 0.9723, acc: 92.95% / test_loss: 0.9751, test_acc: 92.60%
Epoch 117 - loss: 0.9713, acc: 93.03% / test_loss: 0.9755, test_acc: 92.61%
Epoch 118 - loss: 0.9726, acc: 92.87% / test_loss: 0.9748, test_acc: 92.65%
Epoch 119 - loss: 0.9709, acc: 93.05% / test_loss: 0.9742, test_acc: 92.74%
Epoch 120 - loss: 0.9712, acc: 92.99% / test_loss: 0.9765, test_acc: 92.50%
Epoch 121 - loss: 0.9730, acc: 92.84% / test_loss: 0.9769, test_acc: 92.47%
Epoch 122 - loss: 0.9717, acc: 92.95% / test_loss: 0.9756, test_acc: 92.59%
Epoch 123 - loss: 0.9716, acc: 92.96% / test_loss: 0.9750, test_acc: 92.65%
Epoch 124 - loss: 0.9713, acc: 92.99% / test_loss: 0.9753, test_acc: 92.62%
Epoch 125 - loss: 0.9704, acc: 93.09% / test_loss: 0.9748, test_acc: 92.65%
Epoch 126 - loss: 0.9718, acc: 92.93% / test_loss: 0.9780, test_acc: 92.34%
Epoch 127 - loss: 0.9713, acc: 93.01% / test_loss: 0.9754, test_acc: 92.56%
Epoch 128 - loss: 0.9708, acc: 93.05% / test_loss: 0.9763, test_acc: 92.50%
Epoch 129 - loss: 0.9715, acc: 92.98% / test_loss: 0.9741, test_acc: 92.71%
Epoch 130 - loss: 0.9701, acc: 93.10% / test_loss: 0.9766, test_acc: 92.45%
Epoch 131 - loss: 0.9711, acc: 93.01% / test_loss: 0.9757, test_acc: 92.58%
Epoch 132 - loss: 0.9701, acc: 93.12% / test_loss: 0.9740, test_acc: 92.72%
Epoch 133 - loss: 0.9701, acc: 93.10% / test_loss: 0.9769, test_acc: 92.40%
Epoch 134 - loss: 0.9713, acc: 93.01% / test_loss: 0.9766, test_acc: 92.47%
Epoch 135 - loss: 0.9708, acc: 93.04% / test_loss: 0.9762, test_acc: 92.56%
Epoch 136 - loss: 0.9703, acc: 93.11% / test_loss: 0.9754, test_acc: 92.58%
Epoch 137 - loss: 0.9708, acc: 93.04% / test_loss: 0.9751, test_acc: 92.63%
Epoch 138 - loss: 0.9707, acc: 93.06% / test_loss: 0.9743, test_acc: 92.70%
Epoch 139 - loss: 0.9722, acc: 92.91% / test_loss: 0.9757, test_acc: 92.50%
Epoch 140 - loss: 0.9713, acc: 92.98% / test_loss: 0.9750, test_acc: 92.64%
Epoch 141 - loss: 0.9696, acc: 93.16% / test_loss: 0.9742, test_acc: 92.71%
Epoch 142 - loss: 0.9691, acc: 93.22% / test_loss: 0.9744, test_acc: 92.67%
Epoch 143 - loss: 0.9688, acc: 93.24% / test_loss: 0.9744, test_acc: 92.65%
Epoch 144 - loss: 0.9685, acc: 93.27% / test_loss: 0.9762, test_acc: 92.53%
Epoch 145 - loss: 0.9702, acc: 93.10% / test_loss: 0.9744, test_acc: 92.68%
Epoch 146 - loss: 0.9692, acc: 93.20% / test_loss: 0.9736, test_acc: 92.77%
Epoch 147 - loss: 0.9702, acc: 93.11% / test_loss: 0.9764, test_acc: 92.50%
Epoch 148 - loss: 0.9692, acc: 93.21% / test_loss: 0.9733, test_acc: 92.80%
Epoch 149 - loss: 0.9689, acc: 93.20% / test_loss: 0.9744, test_acc: 92.68%
Epoch 150 - loss: 0.9691, acc: 93.23% / test_loss: 0.9758, test_acc: 92.64%
Epoch 151 - loss: 0.9685, acc: 93.29% / test_loss: 0.9751, test_acc: 92.59%
Epoch 152 - loss: 0.9690, acc: 93.21% / test_loss: 0.9726, test_acc: 92.90%
Epoch 153 - loss: 0.9683, acc: 93.26% / test_loss: 0.9756, test_acc: 92.65%
Epoch 154 - loss: 0.9676, acc: 93.36% / test_loss: 0.9716, test_acc: 92.95%
Epoch 155 - loss: 0.9668, acc: 93.42% / test_loss: 0.9725, test_acc: 92.87%
Epoch 156 - loss: 0.9690, acc: 93.20% / test_loss: 0.9763, test_acc: 92.54%
Epoch 157 - loss: 0.9676, acc: 93.36% / test_loss: 0.9717, test_acc: 92.96%
Epoch 158 - loss: 0.9665, acc: 93.46% / test_loss: 0.9715, test_acc: 92.99%
Epoch 159 - loss: 0.9663, acc: 93.47% / test_loss: 0.9706, test_acc: 93.05%
Epoch 160 - loss: 0.9669, acc: 93.43% / test_loss: 0.9745, test_acc: 92.68%
Epoch 161 - loss: 0.9661, acc: 93.50% / test_loss: 0.9718, test_acc: 92.96%
Epoch 162 - loss: 0.9672, acc: 93.39% / test_loss: 0.9731, test_acc: 92.78%
Epoch 163 - loss: 0.9667, acc: 93.46% / test_loss: 0.9710, test_acc: 93.05%
Epoch 164 - loss: 0.9659, acc: 93.52% / test_loss: 0.9705, test_acc: 93.09%
Epoch 165 - loss: 0.9671, acc: 93.43% / test_loss: 0.9713, test_acc: 92.93%
Epoch 166 - loss: 0.9669, acc: 93.44% / test_loss: 0.9761, test_acc: 92.50%
Epoch 167 - loss: 0.9665, acc: 93.47% / test_loss: 0.9706, test_acc: 93.08%
Epoch 168 - loss: 0.9670, acc: 93.44% / test_loss: 0.9711, test_acc: 93.01%
Epoch 169 - loss: 0.9663, acc: 93.47% / test_loss: 0.9689, test_acc: 93.25%
Epoch 170 - loss: 0.9645, acc: 93.64% / test_loss: 0.9706, test_acc: 93.08%
Epoch 171 - loss: 0.9652, acc: 93.58% / test_loss: 0.9712, test_acc: 92.99%
Epoch 172 - loss: 0.9669, acc: 93.40% / test_loss: 0.9695, test_acc: 93.16%
Epoch 173 - loss: 0.9660, acc: 93.51% / test_loss: 0.9719, test_acc: 92.96%
Epoch 174 - loss: 0.9663, acc: 93.50% / test_loss: 0.9695, test_acc: 93.14%
Epoch 175 - loss: 0.9652, acc: 93.59% / test_loss: 0.9690, test_acc: 93.22%
Epoch 176 - loss: 0.9644, acc: 93.65% / test_loss: 0.9702, test_acc: 93.08%
Epoch 177 - loss: 0.9660, acc: 93.52% / test_loss: 0.9727, test_acc: 92.83%
Epoch 178 - loss: 0.9668, acc: 93.42% / test_loss: 0.9699, test_acc: 93.14%
Epoch 179 - loss: 0.9659, acc: 93.50% / test_loss: 0.9697, test_acc: 93.13%
Epoch 180 - loss: 0.9647, acc: 93.64% / test_loss: 0.9695, test_acc: 93.17%
Epoch 181 - loss: 0.9652, acc: 93.57% / test_loss: 0.9711, test_acc: 93.01%
Epoch 182 - loss: 0.9662, acc: 93.51% / test_loss: 0.9694, test_acc: 93.17%
Epoch 183 - loss: 0.9659, acc: 93.52% / test_loss: 0.9733, test_acc: 92.78%
Epoch 184 - loss: 0.9662, acc: 93.47% / test_loss: 0.9694, test_acc: 93.18%
Epoch 185 - loss: 0.9641, acc: 93.69% / test_loss: 0.9688, test_acc: 93.24%
Epoch 186 - loss: 0.9636, acc: 93.72% / test_loss: 0.9702, test_acc: 93.07%
Epoch 187 - loss: 0.9660, acc: 93.48% / test_loss: 0.9696, test_acc: 93.18%
Epoch 188 - loss: 0.9663, acc: 93.49% / test_loss: 0.9766, test_acc: 92.52%
Epoch 189 - loss: 0.9649, acc: 93.64% / test_loss: 0.9710, test_acc: 93.02%
Epoch 190 - loss: 0.9651, acc: 93.59% / test_loss: 0.9697, test_acc: 93.14%
Epoch 191 - loss: 0.9640, acc: 93.69% / test_loss: 0.9689, test_acc: 93.20%
Epoch 192 - loss: 0.9647, acc: 93.61% / test_loss: 0.9708, test_acc: 93.06%
Epoch 193 - loss: 0.9647, acc: 93.62% / test_loss: 0.9703, test_acc: 93.08%
Epoch 194 - loss: 0.9641, acc: 93.70% / test_loss: 0.9700, test_acc: 93.11%
Epoch 195 - loss: 0.9643, acc: 93.67% / test_loss: 0.9687, test_acc: 93.24%
Epoch 196 - loss: 0.9637, acc: 93.72% / test_loss: 0.9689, test_acc: 93.21%
Epoch 197 - loss: 0.9668, acc: 93.42% / test_loss: 0.9697, test_acc: 93.12%
Epoch 198 - loss: 0.9645, acc: 93.67% / test_loss: 0.9691, test_acc: 93.20%
Epoch 199 - loss: 0.9645, acc: 93.64% / test_loss: 0.9691, test_acc: 93.18%
Epoch 200 - loss: 0.9643, acc: 93.66% / test_loss: 0.9705, test_acc: 93.13%
Epoch 201 - loss: 0.9647, acc: 93.63% / test_loss: 0.9688, test_acc: 93.22%
Epoch 202 - loss: 0.9638, acc: 93.75% / test_loss: 0.9702, test_acc: 93.14%
Epoch 203 - loss: 0.9619, acc: 93.94% / test_loss: 0.9806, test_acc: 92.13%
Epoch 204 - loss: 0.9606, acc: 94.07% / test_loss: 0.9710, test_acc: 93.11%
Epoch 205 - loss: 0.9615, acc: 93.95% / test_loss: 0.9672, test_acc: 93.48%
Epoch 206 - loss: 0.9606, acc: 94.10% / test_loss: 0.9667, test_acc: 93.50%
Epoch 207 - loss: 0.9267, acc: 97.82% / test_loss: 0.9281, test_acc: 97.71%
Epoch 208 - loss: 0.9178, acc: 98.76% / test_loss: 0.9258, test_acc: 97.98%
Epoch 209 - loss: 0.9178, acc: 98.73% / test_loss: 0.9259, test_acc: 97.86%
Epoch 210 - loss: 0.9172, acc: 98.78% / test_loss: 0.9276, test_acc: 97.74%
Epoch 211 - loss: 0.9166, acc: 98.84% / test_loss: 0.9249, test_acc: 97.98%
Epoch 212 - loss: 0.9173, acc: 98.78% / test_loss: 0.9239, test_acc: 98.16%
Epoch 213 - loss: 0.9165, acc: 98.84% / test_loss: 0.9275, test_acc: 97.79%
Epoch 214 - loss: 0.9169, acc: 98.80% / test_loss: 0.9265, test_acc: 97.87%
Epoch 215 - loss: 0.9177, acc: 98.75% / test_loss: 0.9235, test_acc: 98.15%
Epoch 216 - loss: 0.9153, acc: 98.96% / test_loss: 0.9241, test_acc: 98.09%
Epoch 217 - loss: 0.9151, acc: 98.97% / test_loss: 0.9276, test_acc: 97.73%
Epoch 218 - loss: 0.9170, acc: 98.78% / test_loss: 0.9249, test_acc: 97.97%
Epoch 219 - loss: 0.9160, acc: 98.89% / test_loss: 0.9219, test_acc: 98.32%
Epoch 220 - loss: 0.9157, acc: 98.94% / test_loss: 0.9238, test_acc: 98.10%
Epoch 221 - loss: 0.9150, acc: 99.00% / test_loss: 0.9238, test_acc: 98.12%
Epoch 222 - loss: 0.9152, acc: 98.97% / test_loss: 0.9242, test_acc: 98.03%
Epoch 223 - loss: 0.9157, acc: 98.93% / test_loss: 0.9227, test_acc: 98.22%
Epoch 224 - loss: 0.9146, acc: 99.06% / test_loss: 0.9231, test_acc: 98.15%
Epoch 225 - loss: 0.9162, acc: 98.89% / test_loss: 0.9247, test_acc: 98.01%
Epoch 226 - loss: 0.9149, acc: 99.02% / test_loss: 0.9290, test_acc: 97.61%
Epoch 227 - loss: 0.9172, acc: 98.76% / test_loss: 0.9226, test_acc: 98.23%
Epoch 228 - loss: 0.9146, acc: 99.04% / test_loss: 0.9231, test_acc: 98.14%
Epoch 229 - loss: 0.9146, acc: 99.02% / test_loss: 0.9224, test_acc: 98.26%
Epoch 230 - loss: 0.9143, acc: 99.08% / test_loss: 0.9267, test_acc: 97.83%
Epoch 231 - loss: 0.9155, acc: 98.98% / test_loss: 0.9230, test_acc: 98.19%
Epoch 232 - loss: 0.9147, acc: 99.00% / test_loss: 0.9255, test_acc: 97.92%
Epoch 233 - loss: 0.9162, acc: 98.86% / test_loss: 0.9237, test_acc: 98.10%
Epoch 234 - loss: 0.9145, acc: 99.05% / test_loss: 0.9247, test_acc: 98.01%
Epoch 235 - loss: 0.9153, acc: 98.98% / test_loss: 0.9227, test_acc: 98.24%
Epoch 236 - loss: 0.9148, acc: 99.01% / test_loss: 0.9228, test_acc: 98.20%
Epoch 237 - loss: 0.9142, acc: 99.09% / test_loss: 0.9219, test_acc: 98.30%
Epoch 238 - loss: 0.9141, acc: 99.09% / test_loss: 0.9251, test_acc: 97.99%
Epoch 239 - loss: 0.9150, acc: 99.00% / test_loss: 0.9227, test_acc: 98.21%
Epoch 240 - loss: 0.9142, acc: 99.07% / test_loss: 0.9240, test_acc: 98.10%
Epoch 241 - loss: 0.9153, acc: 98.97% / test_loss: 0.9244, test_acc: 98.08%
Epoch 242 - loss: 0.9150, acc: 99.01% / test_loss: 0.9263, test_acc: 97.84%
Epoch 243 - loss: 0.9140, acc: 99.10% / test_loss: 0.9266, test_acc: 97.82%
Epoch 244 - loss: 0.9140, acc: 99.09% / test_loss: 0.9228, test_acc: 98.18%
Epoch 245 - loss: 0.9157, acc: 98.92% / test_loss: 0.9246, test_acc: 98.01%
Epoch 246 - loss: 0.9162, acc: 98.85% / test_loss: 0.9229, test_acc: 98.21%
Epoch 247 - loss: 0.9143, acc: 99.04% / test_loss: 0.9243, test_acc: 98.08%
Epoch 248 - loss: 0.9141, acc: 99.08% / test_loss: 0.9224, test_acc: 98.26%
Epoch 249 - loss: 0.9138, acc: 99.11% / test_loss: 0.9227, test_acc: 98.23%
Epoch 250 - loss: 0.9151, acc: 98.97% / test_loss: 0.9213, test_acc: 98.38%
Epoch 251 - loss: 0.9140, acc: 99.12% / test_loss: 0.9230, test_acc: 98.21%
Epoch 252 - loss: 0.9139, acc: 99.10% / test_loss: 0.9260, test_acc: 97.88%
Epoch 253 - loss: 0.9161, acc: 98.84% / test_loss: 0.9253, test_acc: 97.99%
Epoch 254 - loss: 0.9144, acc: 99.05% / test_loss: 0.9220, test_acc: 98.28%
Epoch 255 - loss: 0.9136, acc: 99.13% / test_loss: 0.9222, test_acc: 98.26%
Epoch 256 - loss: 0.9147, acc: 99.03% / test_loss: 0.9221, test_acc: 98.27%
Epoch 257 - loss: 0.9136, acc: 99.12% / test_loss: 0.9233, test_acc: 98.15%
Epoch 258 - loss: 0.9138, acc: 99.11% / test_loss: 0.9222, test_acc: 98.28%
Epoch 259 - loss: 0.9134, acc: 99.14% / test_loss: 0.9225, test_acc: 98.22%
Epoch 260 - loss: 0.9135, acc: 99.13% / test_loss: 0.9218, test_acc: 98.33%
Epoch 261 - loss: 0.9145, acc: 99.05% / test_loss: 0.9230, test_acc: 98.16%
Epoch 262 - loss: 0.9134, acc: 99.15% / test_loss: 0.9229, test_acc: 98.21%
Epoch 263 - loss: 0.9146, acc: 99.03% / test_loss: 0.9267, test_acc: 97.81%
Epoch 264 - loss: 0.9150, acc: 98.99% / test_loss: 0.9225, test_acc: 98.24%
Epoch 265 - loss: 0.9142, acc: 99.06% / test_loss: 0.9244, test_acc: 98.02%
Epoch 266 - loss: 0.9153, acc: 98.96% / test_loss: 0.9229, test_acc: 98.20%
Epoch 267 - loss: 0.9141, acc: 99.08% / test_loss: 0.9241, test_acc: 98.07%
Epoch 268 - loss: 0.9136, acc: 99.14% / test_loss: 0.9234, test_acc: 98.14%
Epoch 269 - loss: 0.9145, acc: 99.04% / test_loss: 0.9229, test_acc: 98.20%
Epoch 270 - loss: 0.9144, acc: 99.03% / test_loss: 0.9234, test_acc: 98.10%
Epoch 271 - loss: 0.9151, acc: 98.98% / test_loss: 0.9228, test_acc: 98.23%
Epoch 272 - loss: 0.9144, acc: 99.05% / test_loss: 0.9226, test_acc: 98.24%
Epoch 273 - loss: 0.9145, acc: 99.03% / test_loss: 0.9228, test_acc: 98.20%
Epoch 274 - loss: 0.9134, acc: 99.15% / test_loss: 0.9215, test_acc: 98.34%
Epoch 275 - loss: 0.9127, acc: 99.22% / test_loss: 0.9237, test_acc: 98.09%
Epoch 276 - loss: 0.9141, acc: 99.09% / test_loss: 0.9222, test_acc: 98.29%
Epoch 277 - loss: 0.9125, acc: 99.24% / test_loss: 0.9224, test_acc: 98.23%
Epoch 278 - loss: 0.9145, acc: 99.03% / test_loss: 0.9233, test_acc: 98.14%
Epoch 279 - loss: 0.9163, acc: 98.84% / test_loss: 0.9229, test_acc: 98.20%
Epoch 280 - loss: 0.9141, acc: 99.10% / test_loss: 0.9236, test_acc: 98.11%
Epoch 281 - loss: 0.9138, acc: 99.11% / test_loss: 0.9225, test_acc: 98.22%
Epoch 282 - loss: 0.9138, acc: 99.12% / test_loss: 0.9218, test_acc: 98.29%
Epoch 283 - loss: 0.9133, acc: 99.17% / test_loss: 0.9242, test_acc: 98.07%
Epoch 284 - loss: 0.9143, acc: 99.04% / test_loss: 0.9230, test_acc: 98.18%
Epoch 285 - loss: 0.9144, acc: 99.04% / test_loss: 0.9221, test_acc: 98.27%
Epoch 286 - loss: 0.9137, acc: 99.12% / test_loss: 0.9292, test_acc: 97.60%
Epoch 287 - loss: 0.9138, acc: 99.11% / test_loss: 0.9249, test_acc: 98.04%
Epoch 288 - loss: 0.9138, acc: 99.11% / test_loss: 0.9268, test_acc: 97.82%
Epoch 289 - loss: 0.9132, acc: 99.18% / test_loss: 0.9233, test_acc: 98.13%
Epoch 290 - loss: 0.9135, acc: 99.12% / test_loss: 0.9228, test_acc: 98.21%
Epoch 291 - loss: 0.9129, acc: 99.20% / test_loss: 0.9226, test_acc: 98.25%
Epoch 292 - loss: 0.9135, acc: 99.15% / test_loss: 0.9240, test_acc: 98.05%
Epoch 293 - loss: 0.9128, acc: 99.23% / test_loss: 0.9233, test_acc: 98.14%
Epoch 294 - loss: 0.9162, acc: 98.88% / test_loss: 0.9252, test_acc: 97.95%
Epoch 295 - loss: 0.9149, acc: 99.00% / test_loss: 0.9230, test_acc: 98.20%
Epoch 296 - loss: 0.9141, acc: 99.07% / test_loss: 0.9225, test_acc: 98.20%
Epoch 297 - loss: 0.9133, acc: 99.15% / test_loss: 0.9230, test_acc: 98.21%
Epoch 298 - loss: 0.9128, acc: 99.20% / test_loss: 0.9223, test_acc: 98.23%
Epoch 299 - loss: 0.9124, acc: 99.25% / test_loss: 0.9227, test_acc: 98.23%
Epoch 300 - loss: 0.9126, acc: 99.22% / test_loss: 0.9219, test_acc: 98.28%
Epoch 301 - loss: 0.9126, acc: 99.23% / test_loss: 0.9221, test_acc: 98.28%
Epoch 302 - loss: 0.9146, acc: 99.00% / test_loss: 0.9222, test_acc: 98.25%
Epoch 303 - loss: 0.9142, acc: 99.07% / test_loss: 0.9218, test_acc: 98.30%
Epoch 304 - loss: 0.9141, acc: 99.08% / test_loss: 0.9213, test_acc: 98.35%
Epoch 305 - loss: 0.9134, acc: 99.16% / test_loss: 0.9214, test_acc: 98.33%
Epoch 306 - loss: 0.9130, acc: 99.18% / test_loss: 0.9225, test_acc: 98.23%
Epoch 307 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.22%
Epoch 308 - loss: 0.9136, acc: 99.13% / test_loss: 0.9229, test_acc: 98.18%
Epoch 309 - loss: 0.9140, acc: 99.06% / test_loss: 0.9223, test_acc: 98.27%
Epoch 310 - loss: 0.9131, acc: 99.17% / test_loss: 0.9278, test_acc: 97.67%
Epoch 311 - loss: 0.9135, acc: 99.12% / test_loss: 0.9224, test_acc: 98.26%
Epoch 312 - loss: 0.9148, acc: 99.02% / test_loss: 0.9233, test_acc: 98.12%
Epoch 313 - loss: 0.9143, acc: 99.06% / test_loss: 0.9230, test_acc: 98.15%
Epoch 314 - loss: 0.9131, acc: 99.20% / test_loss: 0.9239, test_acc: 98.08%
Epoch 315 - loss: 0.9142, acc: 99.06% / test_loss: 0.9245, test_acc: 98.02%
Epoch 316 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.27%
Epoch 317 - loss: 0.9135, acc: 99.12% / test_loss: 0.9220, test_acc: 98.27%
Epoch 318 - loss: 0.9140, acc: 99.10% / test_loss: 0.9229, test_acc: 98.20%
Epoch 319 - loss: 0.9139, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 320 - loss: 0.9132, acc: 99.17% / test_loss: 0.9222, test_acc: 98.26%
Epoch 321 - loss: 0.9139, acc: 99.09% / test_loss: 0.9227, test_acc: 98.18%
Epoch 322 - loss: 0.9134, acc: 99.13% / test_loss: 0.9219, test_acc: 98.29%
Epoch 323 - loss: 0.9141, acc: 99.06% / test_loss: 0.9240, test_acc: 98.07%
Epoch 324 - loss: 0.9138, acc: 99.08% / test_loss: 0.9219, test_acc: 98.30%
Epoch 325 - loss: 0.9141, acc: 99.06% / test_loss: 0.9224, test_acc: 98.26%
Epoch 326 - loss: 0.9127, acc: 99.21% / test_loss: 0.9219, test_acc: 98.29%
Epoch 327 - loss: 0.9127, acc: 99.22% / test_loss: 0.9233, test_acc: 98.14%
Epoch 328 - loss: 0.9131, acc: 99.18% / test_loss: 0.9212, test_acc: 98.36%
Epoch 329 - loss: 0.9130, acc: 99.18% / test_loss: 0.9225, test_acc: 98.21%
Epoch 330 - loss: 0.9123, acc: 99.26% / test_loss: 0.9216, test_acc: 98.32%
Epoch 331 - loss: 0.9128, acc: 99.21% / test_loss: 0.9238, test_acc: 98.09%
Epoch 332 - loss: 0.9131, acc: 99.18% / test_loss: 0.9217, test_acc: 98.32%
Epoch 333 - loss: 0.9146, acc: 99.03% / test_loss: 0.9226, test_acc: 98.22%
Epoch 334 - loss: 0.9153, acc: 98.95% / test_loss: 0.9242, test_acc: 98.04%
Epoch 335 - loss: 0.9140, acc: 99.09% / test_loss: 0.9248, test_acc: 97.98%
Epoch 336 - loss: 0.9139, acc: 99.08% / test_loss: 0.9218, test_acc: 98.29%
Epoch 337 - loss: 0.9123, acc: 99.24% / test_loss: 0.9208, test_acc: 98.39%
Epoch 338 - loss: 0.9130, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 339 - loss: 0.9122, acc: 99.27% / test_loss: 0.9259, test_acc: 97.90%
Epoch 340 - loss: 0.9124, acc: 99.25% / test_loss: 0.9221, test_acc: 98.24%
Epoch 341 - loss: 0.9124, acc: 99.25% / test_loss: 0.9231, test_acc: 98.17%
Epoch 342 - loss: 0.9131, acc: 99.17% / test_loss: 0.9220, test_acc: 98.26%
Epoch 343 - loss: 0.9133, acc: 99.16% / test_loss: 0.9229, test_acc: 98.20%
Epoch 344 - loss: 0.9128, acc: 99.21% / test_loss: 0.9218, test_acc: 98.29%
Epoch 345 - loss: 0.9135, acc: 99.14% / test_loss: 0.9249, test_acc: 98.02%
Epoch 346 - loss: 0.9133, acc: 99.17% / test_loss: 0.9226, test_acc: 98.23%
Epoch 347 - loss: 0.9128, acc: 99.22% / test_loss: 0.9219, test_acc: 98.28%
Epoch 348 - loss: 0.9122, acc: 99.26% / test_loss: 0.9227, test_acc: 98.21%
Epoch 349 - loss: 0.9138, acc: 99.10% / test_loss: 0.9215, test_acc: 98.32%
Epoch 350 - loss: 0.9127, acc: 99.21% / test_loss: 0.9235, test_acc: 98.13%
Epoch 351 - loss: 0.9143, acc: 99.05% / test_loss: 0.9222, test_acc: 98.24%
Epoch 352 - loss: 0.9141, acc: 99.07% / test_loss: 0.9222, test_acc: 98.25%
Epoch 353 - loss: 0.9126, acc: 99.22% / test_loss: 0.9202, test_acc: 98.47%
Epoch 354 - loss: 0.9125, acc: 99.24% / test_loss: 0.9204, test_acc: 98.44%
Epoch 355 - loss: 0.9122, acc: 99.27% / test_loss: 0.9208, test_acc: 98.41%
Epoch 356 - loss: 0.9133, acc: 99.16% / test_loss: 0.9231, test_acc: 98.17%
Epoch 357 - loss: 0.9146, acc: 99.02% / test_loss: 0.9233, test_acc: 98.14%
Epoch 358 - loss: 0.9133, acc: 99.16% / test_loss: 0.9206, test_acc: 98.42%
Epoch 359 - loss: 0.9123, acc: 99.24% / test_loss: 0.9228, test_acc: 98.22%
Epoch 360 - loss: 0.9128, acc: 99.20% / test_loss: 0.9215, test_acc: 98.35%
Epoch 361 - loss: 0.9124, acc: 99.24% / test_loss: 0.9210, test_acc: 98.38%
Epoch 362 - loss: 0.9124, acc: 99.24% / test_loss: 0.9212, test_acc: 98.36%
Epoch 363 - loss: 0.9133, acc: 99.15% / test_loss: 0.9229, test_acc: 98.18%
Epoch 364 - loss: 0.9139, acc: 99.09% / test_loss: 0.9213, test_acc: 98.36%
Epoch 365 - loss: 0.9132, acc: 99.15% / test_loss: 0.9214, test_acc: 98.35%
Epoch 366 - loss: 0.9128, acc: 99.19% / test_loss: 0.9229, test_acc: 98.19%
Epoch 367 - loss: 0.9122, acc: 99.27% / test_loss: 0.9216, test_acc: 98.32%
Epoch 368 - loss: 0.9142, acc: 99.06% / test_loss: 0.9222, test_acc: 98.24%
Epoch 369 - loss: 0.9133, acc: 99.14% / test_loss: 0.9222, test_acc: 98.24%
Epoch 370 - loss: 0.9126, acc: 99.22% / test_loss: 0.9215, test_acc: 98.31%
Epoch 371 - loss: 0.9122, acc: 99.27% / test_loss: 0.9212, test_acc: 98.37%
Epoch 372 - loss: 0.9132, acc: 99.14% / test_loss: 0.9226, test_acc: 98.20%
Epoch 373 - loss: 0.9125, acc: 99.22% / test_loss: 0.9216, test_acc: 98.29%
Epoch 374 - loss: 0.9131, acc: 99.18% / test_loss: 0.9231, test_acc: 98.17%
Epoch 375 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.19%
Epoch 376 - loss: 0.9142, acc: 99.07% / test_loss: 0.9214, test_acc: 98.32%
Epoch 377 - loss: 0.9120, acc: 99.29% / test_loss: 0.9215, test_acc: 98.33%
Epoch 378 - loss: 0.9119, acc: 99.30% / test_loss: 0.9213, test_acc: 98.35%
Epoch 379 - loss: 0.9120, acc: 99.29% / test_loss: 0.9252, test_acc: 97.95%
Epoch 380 - loss: 0.9139, acc: 99.10% / test_loss: 0.9264, test_acc: 97.88%
Epoch 381 - loss: 0.9144, acc: 99.06% / test_loss: 0.9271, test_acc: 97.73%
Epoch 382 - loss: 0.9142, acc: 99.09% / test_loss: 0.9222, test_acc: 98.23%
Epoch 383 - loss: 0.9125, acc: 99.23% / test_loss: 0.9243, test_acc: 98.04%
Epoch 384 - loss: 0.9137, acc: 99.12% / test_loss: 0.9224, test_acc: 98.26%
Epoch 385 - loss: 0.9127, acc: 99.21% / test_loss: 0.9214, test_acc: 98.33%
Epoch 386 - loss: 0.9121, acc: 99.28% / test_loss: 0.9210, test_acc: 98.39%
Epoch 387 - loss: 0.9133, acc: 99.15% / test_loss: 0.9221, test_acc: 98.25%
Epoch 388 - loss: 0.9125, acc: 99.24% / test_loss: 0.9236, test_acc: 98.12%
Epoch 389 - loss: 0.9125, acc: 99.23% / test_loss: 0.9217, test_acc: 98.32%
Epoch 390 - loss: 0.9126, acc: 99.23% / test_loss: 0.9231, test_acc: 98.17%
Epoch 391 - loss: 0.9148, acc: 99.00% / test_loss: 0.9216, test_acc: 98.31%
Epoch 392 - loss: 0.9143, acc: 99.06% / test_loss: 0.9220, test_acc: 98.26%
Epoch 393 - loss: 0.9126, acc: 99.24% / test_loss: 0.9249, test_acc: 97.99%
Epoch 394 - loss: 0.9138, acc: 99.09% / test_loss: 0.9238, test_acc: 98.10%
Epoch 395 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.34%
Epoch 396 - loss: 0.9128, acc: 99.20% / test_loss: 0.9220, test_acc: 98.26%
Epoch 397 - loss: 0.9124, acc: 99.24% / test_loss: 0.9233, test_acc: 98.14%
Epoch 398 - loss: 0.9125, acc: 99.24% / test_loss: 0.9214, test_acc: 98.35%
Epoch 399 - loss: 0.9120, acc: 99.28% / test_loss: 0.9236, test_acc: 98.13%
Epoch 400 - loss: 0.9145, acc: 99.02% / test_loss: 0.9262, test_acc: 97.85%
Best test accuracy 98.47% in epoch 353.
----------------------------------------------------------------------------------------------------
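[Editor's note: each run's closing line ("Best test accuracy X% in epoch Y") is simply the maximum test_acc over the per-epoch lines above it, keeping the first epoch that reaches that maximum. A minimal, hypothetical sketch of how such a summary could be recovered from the raw log text (this parser and its names are illustrative, not the code that produced this output):

    import re

    def best_test_accuracy(log_lines):
        """Parse 'Epoch N - ... test_acc: X%' lines and return (best_acc, epoch)."""
        pattern = re.compile(r"Epoch (\d+) - .*test_acc: ([\d.]+)%")
        best_acc, best_epoch = -1.0, None
        for line in log_lines:
            m = pattern.search(line)
            if m:
                epoch, acc = int(m.group(1)), float(m.group(2))
                if acc > best_acc:  # strict '>' keeps the first epoch reaching the maximum
                    best_acc, best_epoch = acc, epoch
        return best_acc, best_epoch

Applied to the run above, this would return (98.47, 353), matching the printed summary.]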
Run 3
Epoch 1 - loss: 1.3517, acc: 56.00% / test_loss: 1.1935, test_acc: 71.15%
Epoch 2 - loss: 1.1647, acc: 74.74% / test_loss: 1.1107, test_acc: 79.91%
Epoch 3 - loss: 1.1051, acc: 80.39% / test_loss: 1.0854, test_acc: 82.13%
Epoch 4 - loss: 1.0872, acc: 81.94% / test_loss: 1.0485, test_acc: 86.53%
Epoch 5 - loss: 1.0449, acc: 86.36% / test_loss: 1.0433, test_acc: 86.27%
Epoch 6 - loss: 1.0351, acc: 87.11% / test_loss: 1.0256, test_acc: 88.18%
Epoch 7 - loss: 1.0303, acc: 87.51% / test_loss: 1.0206, test_acc: 88.44%
Epoch 8 - loss: 1.0269, acc: 87.78% / test_loss: 1.0137, test_acc: 89.04%
Epoch 9 - loss: 1.0237, acc: 88.12% / test_loss: 1.0146, test_acc: 89.13%
Epoch 10 - loss: 1.0217, acc: 88.25% / test_loss: 1.0112, test_acc: 89.19%
Epoch 11 - loss: 1.0194, acc: 88.44% / test_loss: 1.0104, test_acc: 89.28%
Epoch 12 - loss: 1.0194, acc: 88.43% / test_loss: 1.0121, test_acc: 89.14%
Epoch 13 - loss: 1.0182, acc: 88.49% / test_loss: 1.0102, test_acc: 89.25%
Epoch 14 - loss: 1.0161, acc: 88.77% / test_loss: 1.0173, test_acc: 88.81%
Epoch 15 - loss: 1.0155, acc: 88.83% / test_loss: 1.0093, test_acc: 89.42%
Epoch 16 - loss: 1.0140, acc: 88.91% / test_loss: 1.0057, test_acc: 89.73%
Epoch 17 - loss: 1.0125, acc: 89.08% / test_loss: 1.0064, test_acc: 89.63%
Epoch 18 - loss: 1.0137, acc: 88.98% / test_loss: 1.0067, test_acc: 89.68%
Epoch 19 - loss: 1.0126, acc: 89.05% / test_loss: 1.0049, test_acc: 89.73%
Epoch 20 - loss: 1.0106, acc: 89.21% / test_loss: 1.0070, test_acc: 89.56%
Epoch 21 - loss: 1.0090, acc: 89.40% / test_loss: 1.0059, test_acc: 89.77%
Epoch 22 - loss: 1.0083, acc: 89.48% / test_loss: 1.0043, test_acc: 89.90%
Epoch 23 - loss: 1.0061, acc: 89.67% / test_loss: 1.0050, test_acc: 89.84%
Epoch 24 - loss: 1.0062, acc: 89.69% / test_loss: 1.0048, test_acc: 90.07%
Epoch 25 - loss: 1.0056, acc: 89.78% / test_loss: 1.0065, test_acc: 89.82%
Epoch 26 - loss: 1.0051, acc: 89.85% / test_loss: 1.0011, test_acc: 90.17%
Epoch 27 - loss: 1.0031, acc: 90.03% / test_loss: 1.0000, test_acc: 90.37%
Epoch 28 - loss: 1.0030, acc: 90.03% / test_loss: 1.0003, test_acc: 90.24%
Epoch 29 - loss: 1.0034, acc: 89.96% / test_loss: 0.9998, test_acc: 90.31%
Epoch 30 - loss: 1.0021, acc: 90.03% / test_loss: 1.0021, test_acc: 90.15%
Epoch 31 - loss: 1.0010, acc: 90.20% / test_loss: 1.0003, test_acc: 90.31%
Epoch 32 - loss: 1.0012, acc: 90.13% / test_loss: 0.9984, test_acc: 90.46%
Epoch 33 - loss: 0.9987, acc: 90.43% / test_loss: 0.9978, test_acc: 90.44%
Epoch 34 - loss: 0.9994, acc: 90.35% / test_loss: 0.9986, test_acc: 90.34%
Epoch 35 - loss: 0.9996, acc: 90.28% / test_loss: 1.0004, test_acc: 90.23%
Epoch 36 - loss: 0.9989, acc: 90.45% / test_loss: 0.9948, test_acc: 90.73%
Epoch 37 - loss: 0.9942, acc: 90.86% / test_loss: 0.9982, test_acc: 90.51%
Epoch 38 - loss: 0.9903, acc: 91.29% / test_loss: 0.9892, test_acc: 91.32%
Epoch 39 - loss: 0.9892, acc: 91.38% / test_loss: 0.9887, test_acc: 91.40%
Epoch 40 - loss: 0.9868, acc: 91.59% / test_loss: 0.9838, test_acc: 91.89%
Epoch 41 - loss: 0.9822, acc: 92.07% / test_loss: 0.9821, test_acc: 91.99%
Epoch 42 - loss: 0.9797, acc: 92.26% / test_loss: 0.9812, test_acc: 92.07%
Epoch 43 - loss: 0.9804, acc: 92.21% / test_loss: 0.9818, test_acc: 92.01%
Epoch 44 - loss: 0.9784, acc: 92.36% / test_loss: 0.9846, test_acc: 91.71%
Epoch 45 - loss: 0.9777, acc: 92.48% / test_loss: 0.9810, test_acc: 92.13%
Epoch 46 - loss: 0.9795, acc: 92.36% / test_loss: 0.9839, test_acc: 91.88%
Epoch 47 - loss: 0.9772, acc: 92.50% / test_loss: 0.9789, test_acc: 92.27%
Epoch 48 - loss: 0.9769, acc: 92.51% / test_loss: 0.9795, test_acc: 92.28%
Epoch 49 - loss: 0.9760, acc: 92.70% / test_loss: 0.9790, test_acc: 92.32%
Epoch 50 - loss: 0.9753, acc: 92.71% / test_loss: 0.9781, test_acc: 92.42%
Epoch 51 - loss: 0.9764, acc: 92.56% / test_loss: 0.9812, test_acc: 92.22%
Epoch 52 - loss: 0.9750, acc: 92.75% / test_loss: 0.9802, test_acc: 92.22%
Epoch 53 - loss: 0.9750, acc: 92.71% / test_loss: 0.9785, test_acc: 92.34%
Epoch 54 - loss: 0.9759, acc: 92.65% / test_loss: 0.9760, test_acc: 92.61%
Epoch 55 - loss: 0.9731, acc: 92.92% / test_loss: 0.9763, test_acc: 92.58%
Epoch 56 - loss: 0.9748, acc: 92.75% / test_loss: 0.9784, test_acc: 92.37%
Epoch 57 - loss: 0.9729, acc: 92.94% / test_loss: 0.9771, test_acc: 92.50%
Epoch 58 - loss: 0.9728, acc: 92.93% / test_loss: 0.9756, test_acc: 92.61%
Epoch 59 - loss: 0.9719, acc: 93.02% / test_loss: 0.9750, test_acc: 92.72%
Epoch 60 - loss: 0.9716, acc: 93.05% / test_loss: 0.9767, test_acc: 92.65%
Epoch 61 - loss: 0.9719, acc: 93.02% / test_loss: 0.9760, test_acc: 92.62%
Epoch 62 - loss: 0.9723, acc: 92.95% / test_loss: 0.9747, test_acc: 92.74%
Epoch 63 - loss: 0.9722, acc: 92.96% / test_loss: 0.9799, test_acc: 92.34%
Epoch 64 - loss: 0.9718, acc: 93.03% / test_loss: 0.9749, test_acc: 92.70%
Epoch 65 - loss: 0.9721, acc: 93.02% / test_loss: 0.9754, test_acc: 92.68%
Epoch 66 - loss: 0.9704, acc: 93.15% / test_loss: 0.9755, test_acc: 92.60%
Epoch 67 - loss: 0.9716, acc: 93.02% / test_loss: 0.9751, test_acc: 92.69%
Epoch 68 - loss: 0.9700, acc: 93.16% / test_loss: 0.9744, test_acc: 92.72%
Epoch 69 - loss: 0.9705, acc: 93.11% / test_loss: 0.9754, test_acc: 92.64%
Epoch 70 - loss: 0.9714, acc: 93.06% / test_loss: 0.9725, test_acc: 92.89%
Epoch 71 - loss: 0.9703, acc: 93.13% / test_loss: 0.9732, test_acc: 92.86%
Epoch 72 - loss: 0.9696, acc: 93.23% / test_loss: 0.9817, test_acc: 92.01%
Epoch 73 - loss: 0.9707, acc: 93.11% / test_loss: 0.9734, test_acc: 92.80%
Epoch 74 - loss: 0.9705, acc: 93.14% / test_loss: 0.9759, test_acc: 92.63%
Epoch 75 - loss: 0.9710, acc: 93.10% / test_loss: 0.9734, test_acc: 92.81%
Epoch 76 - loss: 0.9688, acc: 93.28% / test_loss: 0.9723, test_acc: 92.96%
Epoch 77 - loss: 0.9703, acc: 93.12% / test_loss: 0.9750, test_acc: 92.62%
Epoch 78 - loss: 0.9473, acc: 95.63% / test_loss: 0.9387, test_acc: 96.71%
Epoch 79 - loss: 0.9275, acc: 97.82% / test_loss: 0.9339, test_acc: 97.11%
Epoch 80 - loss: 0.9256, acc: 97.96% / test_loss: 0.9311, test_acc: 97.41%
Epoch 81 - loss: 0.9263, acc: 97.89% / test_loss: 0.9312, test_acc: 97.38%
Epoch 82 - loss: 0.9270, acc: 97.82% / test_loss: 0.9313, test_acc: 97.35%
Epoch 83 - loss: 0.9276, acc: 97.78% / test_loss: 0.9316, test_acc: 97.30%
Epoch 84 - loss: 0.9250, acc: 98.01% / test_loss: 0.9337, test_acc: 97.05%
Epoch 85 - loss: 0.9232, acc: 98.18% / test_loss: 0.9301, test_acc: 97.46%
Epoch 86 - loss: 0.9246, acc: 98.04% / test_loss: 0.9308, test_acc: 97.41%
Epoch 87 - loss: 0.9249, acc: 98.02% / test_loss: 0.9339, test_acc: 97.09%
Epoch 88 - loss: 0.9264, acc: 97.87% / test_loss: 0.9301, test_acc: 97.46%
Epoch 89 - loss: 0.9254, acc: 97.97% / test_loss: 0.9323, test_acc: 97.27%
Epoch 90 - loss: 0.9244, acc: 98.07% / test_loss: 0.9327, test_acc: 97.19%
Epoch 91 - loss: 0.9251, acc: 98.00% / test_loss: 0.9290, test_acc: 97.58%
Epoch 92 - loss: 0.9239, acc: 98.10% / test_loss: 0.9293, test_acc: 97.52%
Epoch 93 - loss: 0.9240, acc: 98.14% / test_loss: 0.9315, test_acc: 97.33%
Epoch 94 - loss: 0.9243, acc: 98.04% / test_loss: 0.9331, test_acc: 97.18%
Epoch 95 - loss: 0.9249, acc: 98.01% / test_loss: 0.9292, test_acc: 97.54%
Epoch 96 - loss: 0.9255, acc: 97.96% / test_loss: 0.9322, test_acc: 97.24%
Epoch 97 - loss: 0.9249, acc: 97.98% / test_loss: 0.9296, test_acc: 97.55%
Epoch 98 - loss: 0.9213, acc: 98.36% / test_loss: 0.9275, test_acc: 97.74%
Epoch 99 - loss: 0.9206, acc: 98.45% / test_loss: 0.9298, test_acc: 97.51%
Epoch 100 - loss: 0.9207, acc: 98.45% / test_loss: 0.9284, test_acc: 97.59%
Epoch 101 - loss: 0.9189, acc: 98.64% / test_loss: 0.9270, test_acc: 97.83%
Epoch 102 - loss: 0.9234, acc: 98.10% / test_loss: 0.9309, test_acc: 97.37%
Epoch 103 - loss: 0.9195, acc: 98.54% / test_loss: 0.9269, test_acc: 97.79%
Epoch 104 - loss: 0.9208, acc: 98.43% / test_loss: 0.9264, test_acc: 97.82%
Epoch 105 - loss: 0.9216, acc: 98.33% / test_loss: 0.9275, test_acc: 97.73%
Epoch 106 - loss: 0.9203, acc: 98.44% / test_loss: 0.9273, test_acc: 97.80%
Epoch 107 - loss: 0.9190, acc: 98.60% / test_loss: 0.9251, test_acc: 97.95%
Epoch 108 - loss: 0.9194, acc: 98.57% / test_loss: 0.9262, test_acc: 97.88%
Epoch 109 - loss: 0.9197, acc: 98.51% / test_loss: 0.9253, test_acc: 97.98%
Epoch 110 - loss: 0.9201, acc: 98.51% / test_loss: 0.9321, test_acc: 97.27%
Epoch 111 - loss: 0.9195, acc: 98.56% / test_loss: 0.9281, test_acc: 97.67%
Epoch 112 - loss: 0.9203, acc: 98.49% / test_loss: 0.9276, test_acc: 97.80%
Epoch 113 - loss: 0.9208, acc: 98.41% / test_loss: 0.9309, test_acc: 97.41%
Epoch 114 - loss: 0.9206, acc: 98.41% / test_loss: 0.9268, test_acc: 97.80%
Epoch 115 - loss: 0.9192, acc: 98.57% / test_loss: 0.9258, test_acc: 97.90%
Epoch 116 - loss: 0.9200, acc: 98.47% / test_loss: 0.9429, test_acc: 96.19%
Epoch 117 - loss: 0.9197, acc: 98.54% / test_loss: 0.9276, test_acc: 97.72%
Epoch 118 - loss: 0.9191, acc: 98.60% / test_loss: 0.9270, test_acc: 97.78%
Epoch 119 - loss: 0.9198, acc: 98.51% / test_loss: 0.9265, test_acc: 97.83%
Epoch 120 - loss: 0.9186, acc: 98.65% / test_loss: 0.9258, test_acc: 97.88%
Epoch 121 - loss: 0.9190, acc: 98.60% / test_loss: 0.9290, test_acc: 97.59%
Epoch 122 - loss: 0.9182, acc: 98.64% / test_loss: 0.9255, test_acc: 97.93%
Epoch 123 - loss: 0.9174, acc: 98.76% / test_loss: 0.9245, test_acc: 98.01%
Epoch 124 - loss: 0.9176, acc: 98.73% / test_loss: 0.9259, test_acc: 97.89%
Epoch 125 - loss: 0.9179, acc: 98.71% / test_loss: 0.9267, test_acc: 97.81%
Epoch 126 - loss: 0.9186, acc: 98.67% / test_loss: 0.9288, test_acc: 97.60%
Epoch 127 - loss: 0.9177, acc: 98.73% / test_loss: 0.9254, test_acc: 97.94%
Epoch 128 - loss: 0.9183, acc: 98.66% / test_loss: 0.9249, test_acc: 97.98%
Epoch 129 - loss: 0.9182, acc: 98.66% / test_loss: 0.9241, test_acc: 98.10%
Epoch 130 - loss: 0.9183, acc: 98.66% / test_loss: 0.9268, test_acc: 97.83%
Epoch 131 - loss: 0.9177, acc: 98.73% / test_loss: 0.9282, test_acc: 97.64%
Epoch 132 - loss: 0.9175, acc: 98.74% / test_loss: 0.9254, test_acc: 97.92%
Epoch 133 - loss: 0.9182, acc: 98.67% / test_loss: 0.9261, test_acc: 97.85%
Epoch 134 - loss: 0.9181, acc: 98.72% / test_loss: 0.9261, test_acc: 97.93%
Epoch 135 - loss: 0.9181, acc: 98.68% / test_loss: 0.9269, test_acc: 97.82%
Epoch 136 - loss: 0.9178, acc: 98.72% / test_loss: 0.9250, test_acc: 97.93%
Epoch 137 - loss: 0.9164, acc: 98.85% / test_loss: 0.9244, test_acc: 98.01%
Epoch 138 - loss: 0.9180, acc: 98.69% / test_loss: 0.9259, test_acc: 97.88%
Epoch 139 - loss: 0.9167, acc: 98.81% / test_loss: 0.9254, test_acc: 97.92%
Epoch 140 - loss: 0.9189, acc: 98.64% / test_loss: 0.9290, test_acc: 97.56%
Epoch 141 - loss: 0.9181, acc: 98.69% / test_loss: 0.9271, test_acc: 97.77%
Epoch 142 - loss: 0.9198, acc: 98.48% / test_loss: 0.9237, test_acc: 98.10%
Epoch 143 - loss: 0.9170, acc: 98.81% / test_loss: 0.9269, test_acc: 97.80%
Epoch 144 - loss: 0.9179, acc: 98.72% / test_loss: 0.9248, test_acc: 98.02%
Epoch 145 - loss: 0.9170, acc: 98.80% / test_loss: 0.9257, test_acc: 97.91%
Epoch 146 - loss: 0.9175, acc: 98.76% / test_loss: 0.9268, test_acc: 97.81%
Epoch 147 - loss: 0.9175, acc: 98.77% / test_loss: 0.9256, test_acc: 97.94%
Epoch 148 - loss: 0.9166, acc: 98.84% / test_loss: 0.9237, test_acc: 98.10%
Epoch 149 - loss: 0.9167, acc: 98.82% / test_loss: 0.9244, test_acc: 98.03%
Epoch 150 - loss: 0.9160, acc: 98.88% / test_loss: 0.9240, test_acc: 98.08%
Epoch 151 - loss: 0.9166, acc: 98.82% / test_loss: 0.9250, test_acc: 97.99%
Epoch 152 - loss: 0.9180, acc: 98.69% / test_loss: 0.9248, test_acc: 98.00%
Epoch 153 - loss: 0.9171, acc: 98.78% / test_loss: 0.9235, test_acc: 98.14%
Epoch 154 - loss: 0.9170, acc: 98.78% / test_loss: 0.9235, test_acc: 98.09%
Epoch 155 - loss: 0.9169, acc: 98.78% / test_loss: 0.9295, test_acc: 97.54%
Epoch 156 - loss: 0.9190, acc: 98.60% / test_loss: 0.9281, test_acc: 97.68%
Epoch 157 - loss: 0.9174, acc: 98.74% / test_loss: 0.9252, test_acc: 97.93%
Epoch 158 - loss: 0.9165, acc: 98.86% / test_loss: 0.9256, test_acc: 97.90%
Epoch 159 - loss: 0.9172, acc: 98.75% / test_loss: 0.9259, test_acc: 97.88%
Epoch 160 - loss: 0.9167, acc: 98.83% / test_loss: 0.9258, test_acc: 97.94%
Epoch 161 - loss: 0.9178, acc: 98.71% / test_loss: 0.9249, test_acc: 97.99%
Epoch 162 - loss: 0.9160, acc: 98.90% / test_loss: 0.9258, test_acc: 97.93%
Epoch 163 - loss: 0.9164, acc: 98.84% / test_loss: 0.9249, test_acc: 97.98%
Epoch 164 - loss: 0.9165, acc: 98.84% / test_loss: 0.9244, test_acc: 98.02%
Epoch 165 - loss: 0.9171, acc: 98.76% / test_loss: 0.9242, test_acc: 98.08%
Epoch 166 - loss: 0.9166, acc: 98.84% / test_loss: 0.9257, test_acc: 97.91%
Epoch 167 - loss: 0.9160, acc: 98.91% / test_loss: 0.9237, test_acc: 98.10%
Epoch 168 - loss: 0.9159, acc: 98.88% / test_loss: 0.9240, test_acc: 98.11%
Epoch 169 - loss: 0.9173, acc: 98.78% / test_loss: 0.9261, test_acc: 97.87%
Epoch 170 - loss: 0.9172, acc: 98.76% / test_loss: 0.9252, test_acc: 97.92%
Epoch 171 - loss: 0.9157, acc: 98.92% / test_loss: 0.9232, test_acc: 98.16%
Epoch 172 - loss: 0.9159, acc: 98.89% / test_loss: 0.9239, test_acc: 98.05%
Epoch 173 - loss: 0.9152, acc: 98.96% / test_loss: 0.9245, test_acc: 98.01%
Epoch 174 - loss: 0.9177, acc: 98.73% / test_loss: 0.9280, test_acc: 97.67%
Epoch 175 - loss: 0.9171, acc: 98.77% / test_loss: 0.9258, test_acc: 97.95%
Epoch 176 - loss: 0.9177, acc: 98.72% / test_loss: 0.9306, test_acc: 97.42%
Epoch 177 - loss: 0.9172, acc: 98.75% / test_loss: 0.9248, test_acc: 98.02%
Epoch 178 - loss: 0.9173, acc: 98.75% / test_loss: 0.9259, test_acc: 97.89%
Epoch 179 - loss: 0.9159, acc: 98.90% / test_loss: 0.9256, test_acc: 97.91%
Epoch 180 - loss: 0.9161, acc: 98.89% / test_loss: 0.9242, test_acc: 98.07%
Epoch 181 - loss: 0.9151, acc: 98.97% / test_loss: 0.9231, test_acc: 98.17%
Epoch 182 - loss: 0.9147, acc: 99.03% / test_loss: 0.9246, test_acc: 98.01%
Epoch 183 - loss: 0.9154, acc: 98.96% / test_loss: 0.9328, test_acc: 97.21%
Epoch 184 - loss: 0.9167, acc: 98.81% / test_loss: 0.9248, test_acc: 97.99%
Epoch 185 - loss: 0.9175, acc: 98.72% / test_loss: 0.9246, test_acc: 97.99%
Epoch 186 - loss: 0.9161, acc: 98.88% / test_loss: 0.9246, test_acc: 97.98%
Epoch 187 - loss: 0.9166, acc: 98.83% / test_loss: 0.9255, test_acc: 97.89%
Epoch 188 - loss: 0.9153, acc: 98.97% / test_loss: 0.9252, test_acc: 97.96%
Epoch 189 - loss: 0.9168, acc: 98.82% / test_loss: 0.9244, test_acc: 98.04%
Epoch 190 - loss: 0.9172, acc: 98.74% / test_loss: 0.9275, test_acc: 97.76%
Epoch 191 - loss: 0.9156, acc: 98.94% / test_loss: 0.9236, test_acc: 98.13%
Epoch 192 - loss: 0.9156, acc: 98.93% / test_loss: 0.9237, test_acc: 98.11%
Epoch 193 - loss: 0.9162, acc: 98.85% / test_loss: 0.9283, test_acc: 97.61%
Epoch 194 - loss: 0.9182, acc: 98.66% / test_loss: 0.9276, test_acc: 97.68%
Epoch 195 - loss: 0.9167, acc: 98.83% / test_loss: 0.9235, test_acc: 98.13%
Epoch 196 - loss: 0.9148, acc: 99.00% / test_loss: 0.9229, test_acc: 98.21%
Epoch 197 - loss: 0.9167, acc: 98.81% / test_loss: 0.9301, test_acc: 97.46%
Epoch 198 - loss: 0.9159, acc: 98.91% / test_loss: 0.9263, test_acc: 97.86%
Epoch 199 - loss: 0.9159, acc: 98.90% / test_loss: 0.9239, test_acc: 98.06%
Epoch 200 - loss: 0.9156, acc: 98.94% / test_loss: 0.9224, test_acc: 98.24%
Epoch 201 - loss: 0.9157, acc: 98.94% / test_loss: 0.9266, test_acc: 97.79%
Epoch 202 - loss: 0.9172, acc: 98.75% / test_loss: 0.9257, test_acc: 97.92%
Epoch 203 - loss: 0.9164, acc: 98.85% / test_loss: 0.9241, test_acc: 98.10%
Epoch 204 - loss: 0.9157, acc: 98.93% / test_loss: 0.9237, test_acc: 98.09%
Epoch 205 - loss: 0.9159, acc: 98.89% / test_loss: 0.9236, test_acc: 98.14%
Epoch 206 - loss: 0.9157, acc: 98.92% / test_loss: 0.9245, test_acc: 98.01%
Epoch 207 - loss: 0.9152, acc: 98.97% / test_loss: 0.9246, test_acc: 98.02%
Epoch 208 - loss: 0.9186, acc: 98.61% / test_loss: 0.9411, test_acc: 96.40%
Epoch 209 - loss: 0.9173, acc: 98.76% / test_loss: 0.9238, test_acc: 98.11%
Epoch 210 - loss: 0.9158, acc: 98.91% / test_loss: 0.9228, test_acc: 98.23%
Epoch 211 - loss: 0.9155, acc: 98.94% / test_loss: 0.9248, test_acc: 97.98%
Epoch 212 - loss: 0.9165, acc: 98.84% / test_loss: 0.9238, test_acc: 98.08%
Epoch 213 - loss: 0.9172, acc: 98.78% / test_loss: 0.9248, test_acc: 98.01%
Epoch 214 - loss: 0.9155, acc: 98.94% / test_loss: 0.9246, test_acc: 98.02%
Epoch 215 - loss: 0.9157, acc: 98.91% / test_loss: 0.9252, test_acc: 97.95%
Epoch 216 - loss: 0.9145, acc: 99.05% / test_loss: 0.9234, test_acc: 98.13%
Epoch 217 - loss: 0.9149, acc: 98.99% / test_loss: 0.9237, test_acc: 98.09%
Epoch 218 - loss: 0.9157, acc: 98.91% / test_loss: 0.9254, test_acc: 97.92%
Epoch 219 - loss: 0.9151, acc: 98.98% / test_loss: 0.9226, test_acc: 98.23%
Epoch 220 - loss: 0.9146, acc: 99.02% / test_loss: 0.9234, test_acc: 98.14%
Epoch 221 - loss: 0.9143, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 222 - loss: 0.9149, acc: 99.00% / test_loss: 0.9232, test_acc: 98.18%
Epoch 223 - loss: 0.9168, acc: 98.78% / test_loss: 0.9263, test_acc: 97.85%
Epoch 224 - loss: 0.9176, acc: 98.72% / test_loss: 0.9234, test_acc: 98.14%
Epoch 225 - loss: 0.9161, acc: 98.89% / test_loss: 0.9239, test_acc: 98.07%
Epoch 226 - loss: 0.9154, acc: 98.95% / test_loss: 0.9237, test_acc: 98.10%
Epoch 227 - loss: 0.9152, acc: 98.98% / test_loss: 0.9246, test_acc: 98.01%
Epoch 228 - loss: 0.9158, acc: 98.90% / test_loss: 0.9244, test_acc: 98.10%
Epoch 229 - loss: 0.9155, acc: 98.94% / test_loss: 0.9242, test_acc: 98.04%
Epoch 230 - loss: 0.9148, acc: 99.03% / test_loss: 0.9242, test_acc: 98.05%
Epoch 231 - loss: 0.9161, acc: 98.87% / test_loss: 0.9268, test_acc: 97.80%
Epoch 232 - loss: 0.9145, acc: 99.05% / test_loss: 0.9220, test_acc: 98.27%
Epoch 233 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.24%
Epoch 234 - loss: 0.9143, acc: 99.06% / test_loss: 0.9231, test_acc: 98.16%
Epoch 235 - loss: 0.9155, acc: 98.93% / test_loss: 0.9265, test_acc: 97.83%
Epoch 236 - loss: 0.9171, acc: 98.77% / test_loss: 0.9223, test_acc: 98.23%
Epoch 237 - loss: 0.9154, acc: 98.96% / test_loss: 0.9227, test_acc: 98.20%
Epoch 238 - loss: 0.9146, acc: 99.03% / test_loss: 0.9225, test_acc: 98.21%
Epoch 239 - loss: 0.9146, acc: 99.03% / test_loss: 0.9235, test_acc: 98.13%
Epoch 240 - loss: 0.9142, acc: 99.06% / test_loss: 0.9223, test_acc: 98.26%
Epoch 241 - loss: 0.9163, acc: 98.85% / test_loss: 0.9264, test_acc: 97.80%
Epoch 242 - loss: 0.9155, acc: 98.94% / test_loss: 0.9235, test_acc: 98.14%
Epoch 243 - loss: 0.9160, acc: 98.88% / test_loss: 0.9223, test_acc: 98.26%
Epoch 244 - loss: 0.9162, acc: 98.88% / test_loss: 0.9272, test_acc: 97.73%
Epoch 245 - loss: 0.9154, acc: 98.97% / test_loss: 0.9262, test_acc: 97.81%
Epoch 246 - loss: 0.9162, acc: 98.85% / test_loss: 0.9230, test_acc: 98.18%
Epoch 247 - loss: 0.9153, acc: 98.94% / test_loss: 0.9241, test_acc: 98.08%
Epoch 248 - loss: 0.9155, acc: 98.94% / test_loss: 0.9229, test_acc: 98.17%
Epoch 249 - loss: 0.9143, acc: 99.06% / test_loss: 0.9260, test_acc: 97.83%
Epoch 250 - loss: 0.9147, acc: 99.01% / test_loss: 0.9241, test_acc: 98.07%
Epoch 251 - loss: 0.9143, acc: 99.06% / test_loss: 0.9233, test_acc: 98.15%
Epoch 252 - loss: 0.9144, acc: 99.05% / test_loss: 0.9215, test_acc: 98.33%
Epoch 253 - loss: 0.9138, acc: 99.10% / test_loss: 0.9226, test_acc: 98.20%
Epoch 254 - loss: 0.9154, acc: 98.96% / test_loss: 0.9241, test_acc: 98.07%
Epoch 255 - loss: 0.9140, acc: 99.09% / test_loss: 0.9218, test_acc: 98.29%
Epoch 256 - loss: 0.9133, acc: 99.15% / test_loss: 0.9236, test_acc: 98.10%
Epoch 257 - loss: 0.9154, acc: 98.94% / test_loss: 0.9236, test_acc: 98.11%
Epoch 258 - loss: 0.9152, acc: 99.00% / test_loss: 0.9221, test_acc: 98.27%
Epoch 259 - loss: 0.9141, acc: 99.06% / test_loss: 0.9220, test_acc: 98.28%
Epoch 260 - loss: 0.9137, acc: 99.11% / test_loss: 0.9241, test_acc: 98.09%
Epoch 261 - loss: 0.9146, acc: 99.01% / test_loss: 0.9258, test_acc: 97.91%
Epoch 262 - loss: 0.9141, acc: 99.09% / test_loss: 0.9245, test_acc: 98.04%
Epoch 263 - loss: 0.9155, acc: 98.92% / test_loss: 0.9252, test_acc: 97.95%
Epoch 264 - loss: 0.9166, acc: 98.81% / test_loss: 0.9278, test_acc: 97.72%
Epoch 265 - loss: 0.9152, acc: 98.97% / test_loss: 0.9236, test_acc: 98.12%
Epoch 266 - loss: 0.9132, acc: 99.17% / test_loss: 0.9245, test_acc: 98.04%
Epoch 267 - loss: 0.9149, acc: 98.97% / test_loss: 0.9230, test_acc: 98.15%
Epoch 268 - loss: 0.9131, acc: 99.18% / test_loss: 0.9226, test_acc: 98.21%
Epoch 269 - loss: 0.9153, acc: 98.94% / test_loss: 0.9235, test_acc: 98.14%
Epoch 270 - loss: 0.9159, acc: 98.88% / test_loss: 0.9229, test_acc: 98.21%
Epoch 271 - loss: 0.9145, acc: 99.03% / test_loss: 0.9236, test_acc: 98.10%
Epoch 272 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.26%
Epoch 273 - loss: 0.9134, acc: 99.15% / test_loss: 0.9320, test_acc: 97.25%
Epoch 274 - loss: 0.9154, acc: 98.92% / test_loss: 0.9221, test_acc: 98.27%
Epoch 275 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.26%
Epoch 276 - loss: 0.9137, acc: 99.13% / test_loss: 0.9226, test_acc: 98.24%
Epoch 277 - loss: 0.9136, acc: 99.13% / test_loss: 0.9252, test_acc: 97.95%
Epoch 278 - loss: 0.9147, acc: 99.02% / test_loss: 0.9218, test_acc: 98.29%
Epoch 279 - loss: 0.9155, acc: 98.94% / test_loss: 0.9250, test_acc: 97.95%
Epoch 280 - loss: 0.9165, acc: 98.85% / test_loss: 0.9221, test_acc: 98.29%
Epoch 281 - loss: 0.9141, acc: 99.06% / test_loss: 0.9208, test_acc: 98.41%
Epoch 282 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.32%
Epoch 283 - loss: 0.9134, acc: 99.15% / test_loss: 0.9225, test_acc: 98.23%
Epoch 284 - loss: 0.9138, acc: 99.10% / test_loss: 0.9228, test_acc: 98.16%
Epoch 285 - loss: 0.9151, acc: 98.98% / test_loss: 0.9251, test_acc: 97.98%
Epoch 286 - loss: 0.9155, acc: 98.93% / test_loss: 0.9247, test_acc: 97.98%
Epoch 287 - loss: 0.9157, acc: 98.94% / test_loss: 0.9233, test_acc: 98.15%
Epoch 288 - loss: 0.9156, acc: 98.92% / test_loss: 0.9236, test_acc: 98.15%
Epoch 289 - loss: 0.9142, acc: 99.07% / test_loss: 0.9220, test_acc: 98.30%
Epoch 290 - loss: 0.9137, acc: 99.12% / test_loss: 0.9212, test_acc: 98.36%
Epoch 291 - loss: 0.9133, acc: 99.15% / test_loss: 0.9212, test_acc: 98.34%
Epoch 292 - loss: 0.9132, acc: 99.16% / test_loss: 0.9219, test_acc: 98.32%
Epoch 293 - loss: 0.9147, acc: 99.00% / test_loss: 0.9305, test_acc: 97.41%
Epoch 294 - loss: 0.9151, acc: 98.97% / test_loss: 0.9223, test_acc: 98.25%
Epoch 295 - loss: 0.9154, acc: 98.94% / test_loss: 0.9220, test_acc: 98.26%
Epoch 296 - loss: 0.9143, acc: 99.05% / test_loss: 0.9230, test_acc: 98.17%
Epoch 297 - loss: 0.9140, acc: 99.08% / test_loss: 0.9208, test_acc: 98.41%
Epoch 298 - loss: 0.9133, acc: 99.15% / test_loss: 0.9207, test_acc: 98.41%
Epoch 299 - loss: 0.9134, acc: 99.15% / test_loss: 0.9211, test_acc: 98.38%
Epoch 300 - loss: 0.9136, acc: 99.15% / test_loss: 0.9230, test_acc: 98.18%
Epoch 301 - loss: 0.9147, acc: 99.02% / test_loss: 0.9243, test_acc: 98.04%
Epoch 302 - loss: 0.9139, acc: 99.08% / test_loss: 0.9216, test_acc: 98.30%
Epoch 303 - loss: 0.9134, acc: 99.15% / test_loss: 0.9233, test_acc: 98.14%
Epoch 304 - loss: 0.9159, acc: 98.90% / test_loss: 0.9263, test_acc: 97.84%
Epoch 305 - loss: 0.9133, acc: 99.17% / test_loss: 0.9215, test_acc: 98.34%
Epoch 306 - loss: 0.9126, acc: 99.23% / test_loss: 0.9229, test_acc: 98.19%
Epoch 307 - loss: 0.9134, acc: 99.15% / test_loss: 0.9242, test_acc: 98.07%
Epoch 308 - loss: 0.9133, acc: 99.16% / test_loss: 0.9230, test_acc: 98.19%
Epoch 309 - loss: 0.9131, acc: 99.16% / test_loss: 0.9215, test_acc: 98.32%
Epoch 310 - loss: 0.9152, acc: 98.97% / test_loss: 0.9224, test_acc: 98.23%
Epoch 311 - loss: 0.9132, acc: 99.17% / test_loss: 0.9231, test_acc: 98.14%
Epoch 312 - loss: 0.9124, acc: 99.24% / test_loss: 0.9232, test_acc: 98.13%
Epoch 313 - loss: 0.9141, acc: 99.05% / test_loss: 0.9241, test_acc: 98.05%
Epoch 314 - loss: 0.9129, acc: 99.20% / test_loss: 0.9254, test_acc: 97.93%
Epoch 315 - loss: 0.9156, acc: 98.90% / test_loss: 0.9240, test_acc: 98.06%
Epoch 316 - loss: 0.9137, acc: 99.12% / test_loss: 0.9217, test_acc: 98.32%
Epoch 317 - loss: 0.9126, acc: 99.22% / test_loss: 0.9219, test_acc: 98.32%
Epoch 318 - loss: 0.9136, acc: 99.13% / test_loss: 0.9220, test_acc: 98.29%
Epoch 319 - loss: 0.9134, acc: 99.15% / test_loss: 0.9211, test_acc: 98.37%
Epoch 320 - loss: 0.9144, acc: 99.03% / test_loss: 0.9231, test_acc: 98.17%
Epoch 321 - loss: 0.9145, acc: 99.03% / test_loss: 0.9226, test_acc: 98.17%
Epoch 322 - loss: 0.9125, acc: 99.23% / test_loss: 0.9227, test_acc: 98.20%
Epoch 323 - loss: 0.9142, acc: 99.07% / test_loss: 0.9230, test_acc: 98.20%
Epoch 324 - loss: 0.9137, acc: 99.12% / test_loss: 0.9265, test_acc: 97.83%
Epoch 325 - loss: 0.9147, acc: 99.02% / test_loss: 0.9220, test_acc: 98.30%
Epoch 326 - loss: 0.9130, acc: 99.19% / test_loss: 0.9218, test_acc: 98.31%
Epoch 327 - loss: 0.9127, acc: 99.21% / test_loss: 0.9222, test_acc: 98.26%
Epoch 328 - loss: 0.9132, acc: 99.17% / test_loss: 0.9235, test_acc: 98.13%
Epoch 329 - loss: 0.9166, acc: 98.81% / test_loss: 0.9239, test_acc: 98.07%
Epoch 330 - loss: 0.9141, acc: 99.07% / test_loss: 0.9242, test_acc: 98.05%
Epoch 331 - loss: 0.9131, acc: 99.17% / test_loss: 0.9243, test_acc: 98.06%
Epoch 332 - loss: 0.9133, acc: 99.16% / test_loss: 0.9222, test_acc: 98.26%
Epoch 333 - loss: 0.9129, acc: 99.18% / test_loss: 0.9228, test_acc: 98.18%
Epoch 334 - loss: 0.9130, acc: 99.19% / test_loss: 0.9227, test_acc: 98.21%
Epoch 335 - loss: 0.9150, acc: 98.99% / test_loss: 0.9233, test_acc: 98.14%
Epoch 336 - loss: 0.9152, acc: 98.95% / test_loss: 0.9229, test_acc: 98.21%
Epoch 337 - loss: 0.9133, acc: 99.15% / test_loss: 0.9219, test_acc: 98.29%
Epoch 338 - loss: 0.9143, acc: 99.03% / test_loss: 0.9233, test_acc: 98.16%
Epoch 339 - loss: 0.9153, acc: 98.95% / test_loss: 0.9243, test_acc: 98.03%
Epoch 340 - loss: 0.9146, acc: 99.03% / test_loss: 0.9228, test_acc: 98.17%
Epoch 341 - loss: 0.9126, acc: 99.22% / test_loss: 0.9223, test_acc: 98.25%
Epoch 342 - loss: 0.9124, acc: 99.24% / test_loss: 0.9222, test_acc: 98.26%
Epoch 343 - loss: 0.9124, acc: 99.24% / test_loss: 0.9219, test_acc: 98.30%
Epoch 344 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.32%
Epoch 345 - loss: 0.9124, acc: 99.24% / test_loss: 0.9221, test_acc: 98.28%
Epoch 346 - loss: 0.9124, acc: 99.24% / test_loss: 0.9218, test_acc: 98.28%
Epoch 347 - loss: 0.9124, acc: 99.24% / test_loss: 0.9223, test_acc: 98.22%
Epoch 348 - loss: 0.9124, acc: 99.24% / test_loss: 0.9227, test_acc: 98.19%
Epoch 349 - loss: 0.9196, acc: 98.52% / test_loss: 0.9231, test_acc: 98.15%
Epoch 350 - loss: 0.9151, acc: 98.97% / test_loss: 0.9235, test_acc: 98.11%
Epoch 351 - loss: 0.9137, acc: 99.11% / test_loss: 0.9290, test_acc: 97.59%
Epoch 352 - loss: 0.9151, acc: 98.97% / test_loss: 0.9241, test_acc: 98.07%
Epoch 353 - loss: 0.9133, acc: 99.15% / test_loss: 0.9217, test_acc: 98.29%
Epoch 354 - loss: 0.9127, acc: 99.21% / test_loss: 0.9214, test_acc: 98.35%
Epoch 355 - loss: 0.9126, acc: 99.22% / test_loss: 0.9217, test_acc: 98.31%
Epoch 356 - loss: 0.9139, acc: 99.09% / test_loss: 0.9387, test_acc: 96.62%
Epoch 357 - loss: 0.9163, acc: 98.85% / test_loss: 0.9216, test_acc: 98.31%
Epoch 358 - loss: 0.9149, acc: 99.00% / test_loss: 0.9229, test_acc: 98.20%
Epoch 359 - loss: 0.9143, acc: 99.05% / test_loss: 0.9251, test_acc: 97.96%
Epoch 360 - loss: 0.9136, acc: 99.12% / test_loss: 0.9227, test_acc: 98.19%
Epoch 361 - loss: 0.9139, acc: 99.11% / test_loss: 0.9223, test_acc: 98.24%
Epoch 362 - loss: 0.9140, acc: 99.08% / test_loss: 0.9221, test_acc: 98.29%
Epoch 363 - loss: 0.9141, acc: 99.07% / test_loss: 0.9232, test_acc: 98.14%
Epoch 364 - loss: 0.9141, acc: 99.06% / test_loss: 0.9222, test_acc: 98.23%
Epoch 365 - loss: 0.9136, acc: 99.12% / test_loss: 0.9258, test_acc: 97.94%
Epoch 366 - loss: 0.9142, acc: 99.06% / test_loss: 0.9242, test_acc: 98.07%
Epoch 367 - loss: 0.9148, acc: 99.01% / test_loss: 0.9222, test_acc: 98.21%
Epoch 368 - loss: 0.9127, acc: 99.21% / test_loss: 0.9221, test_acc: 98.26%
Epoch 369 - loss: 0.9127, acc: 99.21% / test_loss: 0.9215, test_acc: 98.32%
Epoch 370 - loss: 0.9136, acc: 99.12% / test_loss: 0.9215, test_acc: 98.31%
Epoch 371 - loss: 0.9124, acc: 99.24% / test_loss: 0.9216, test_acc: 98.32%
Epoch 372 - loss: 0.9124, acc: 99.24% / test_loss: 0.9211, test_acc: 98.39%
Epoch 373 - loss: 0.9152, acc: 98.97% / test_loss: 0.9226, test_acc: 98.22%
Epoch 374 - loss: 0.9142, acc: 99.06% / test_loss: 0.9234, test_acc: 98.13%
Epoch 375 - loss: 0.9135, acc: 99.15% / test_loss: 0.9227, test_acc: 98.20%
Epoch 376 - loss: 0.9140, acc: 99.09% / test_loss: 0.9220, test_acc: 98.28%
Epoch 377 - loss: 0.9131, acc: 99.18% / test_loss: 0.9224, test_acc: 98.25%
Epoch 378 - loss: 0.9140, acc: 99.09% / test_loss: 0.9239, test_acc: 98.07%
Epoch 379 - loss: 0.9134, acc: 99.18% / test_loss: 0.9215, test_acc: 98.33%
Epoch 380 - loss: 0.9123, acc: 99.25% / test_loss: 0.9227, test_acc: 98.18%
Epoch 381 - loss: 0.9141, acc: 99.07% / test_loss: 0.9218, test_acc: 98.32%
Epoch 382 - loss: 0.9127, acc: 99.21% / test_loss: 0.9219, test_acc: 98.29%
Epoch 383 - loss: 0.9129, acc: 99.19% / test_loss: 0.9220, test_acc: 98.27%
Epoch 384 - loss: 0.9130, acc: 99.18% / test_loss: 0.9224, test_acc: 98.23%
Epoch 385 - loss: 0.9135, acc: 99.15% / test_loss: 0.9221, test_acc: 98.28%
Epoch 386 - loss: 0.9131, acc: 99.19% / test_loss: 0.9220, test_acc: 98.29%
Epoch 387 - loss: 0.9129, acc: 99.20% / test_loss: 0.9222, test_acc: 98.23%
Epoch 388 - loss: 0.9157, acc: 98.90% / test_loss: 0.9247, test_acc: 97.98%
Epoch 389 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.17%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9240, test_acc: 98.10%
Epoch 391 - loss: 0.9135, acc: 99.13% / test_loss: 0.9241, test_acc: 98.07%
Epoch 392 - loss: 0.9155, acc: 98.93% / test_loss: 0.9231, test_acc: 98.17%
Epoch 393 - loss: 0.9144, acc: 99.04% / test_loss: 0.9233, test_acc: 98.16%
Epoch 394 - loss: 0.9132, acc: 99.17% / test_loss: 0.9230, test_acc: 98.17%
Epoch 395 - loss: 0.9131, acc: 99.19% / test_loss: 0.9220, test_acc: 98.29%
Epoch 396 - loss: 0.9141, acc: 99.06% / test_loss: 0.9229, test_acc: 98.18%
Epoch 397 - loss: 0.9152, acc: 98.96% / test_loss: 0.9232, test_acc: 98.14%
Epoch 398 - loss: 0.9130, acc: 99.18% / test_loss: 0.9233, test_acc: 98.17%
Epoch 399 - loss: 0.9133, acc: 99.16% / test_loss: 0.9233, test_acc: 98.14%
Epoch 400 - loss: 0.9144, acc: 99.06% / test_loss: 0.9259, test_acc: 97.89%
Best test accuracy 98.41% in epoch 281.
----------------------------------------------------------------------------------------------------
Run 4
Epoch 1 - loss: 1.3437, acc: 57.12% / test_loss: 1.2475, test_acc: 63.71%
Epoch 2 - loss: 1.1105, acc: 80.26% / test_loss: 1.0988, test_acc: 80.91%
Epoch 3 - loss: 1.0926, acc: 81.47% / test_loss: 1.0953, test_acc: 81.34%
Epoch 4 - loss: 1.0871, acc: 81.77% / test_loss: 1.0791, test_acc: 82.70%
Epoch 5 - loss: 1.0826, acc: 82.20% / test_loss: 1.0871, test_acc: 81.94%
Epoch 6 - loss: 1.0662, acc: 84.01% / test_loss: 1.0328, test_acc: 87.56%
Epoch 7 - loss: 1.0377, acc: 86.99% / test_loss: 1.0446, test_acc: 86.19%
Epoch 8 - loss: 1.0316, acc: 87.49% / test_loss: 1.0257, test_acc: 88.04%
Epoch 9 - loss: 1.0265, acc: 87.92% / test_loss: 1.0217, test_acc: 88.30%
Epoch 10 - loss: 1.0262, acc: 87.92% / test_loss: 1.0162, test_acc: 88.89%
Epoch 11 - loss: 1.0244, acc: 88.03% / test_loss: 1.0166, test_acc: 88.84%
Epoch 12 - loss: 1.0222, acc: 88.31% / test_loss: 1.0172, test_acc: 88.76%
Epoch 13 - loss: 1.0208, acc: 88.46% / test_loss: 1.0245, test_acc: 88.09%
Epoch 14 - loss: 1.0207, acc: 88.37% / test_loss: 1.0138, test_acc: 89.04%
Epoch 15 - loss: 1.0204, acc: 88.35% / test_loss: 1.0114, test_acc: 89.26%
Epoch 16 - loss: 1.0175, acc: 88.70% / test_loss: 1.0115, test_acc: 89.26%
Epoch 17 - loss: 1.0177, acc: 88.64% / test_loss: 1.0099, test_acc: 89.41%
Epoch 18 - loss: 1.0140, acc: 89.07% / test_loss: 1.0100, test_acc: 89.41%
Epoch 19 - loss: 1.0145, acc: 88.95% / test_loss: 1.0105, test_acc: 89.29%
Epoch 20 - loss: 1.0132, acc: 89.01% / test_loss: 1.0053, test_acc: 89.83%
Epoch 21 - loss: 1.0108, acc: 89.31% / test_loss: 1.0074, test_acc: 89.60%
Epoch 22 - loss: 1.0094, acc: 89.48% / test_loss: 1.0071, test_acc: 89.68%
Epoch 23 - loss: 1.0110, acc: 89.27% / test_loss: 1.0045, test_acc: 89.88%
Epoch 24 - loss: 1.0094, acc: 89.37% / test_loss: 1.0033, test_acc: 90.00%
Epoch 25 - loss: 1.0072, acc: 89.60% / test_loss: 1.0055, test_acc: 89.91%
Epoch 26 - loss: 1.0083, acc: 89.53% / test_loss: 1.0047, test_acc: 89.90%
Epoch 27 - loss: 1.0070, acc: 89.63% / test_loss: 1.0041, test_acc: 89.88%
Epoch 28 - loss: 1.0071, acc: 89.60% / test_loss: 1.0033, test_acc: 89.99%
Epoch 29 - loss: 1.0067, acc: 89.66% / test_loss: 1.0051, test_acc: 89.88%
Epoch 30 - loss: 1.0077, acc: 89.57% / test_loss: 1.0046, test_acc: 89.90%
Epoch 31 - loss: 1.0059, acc: 89.73% / test_loss: 1.0034, test_acc: 90.00%
Epoch 32 - loss: 1.0063, acc: 89.69% / test_loss: 1.0001, test_acc: 90.28%
Epoch 33 - loss: 1.0047, acc: 89.83% / test_loss: 1.0032, test_acc: 90.03%
Epoch 34 - loss: 1.0046, acc: 89.84% / test_loss: 1.0022, test_acc: 90.13%
Epoch 35 - loss: 1.0053, acc: 89.77% / test_loss: 1.0066, test_acc: 89.75%
Epoch 36 - loss: 1.0054, acc: 89.74% / test_loss: 1.0044, test_acc: 89.97%
Epoch 37 - loss: 1.0040, acc: 89.93% / test_loss: 1.0001, test_acc: 90.26%
Epoch 38 - loss: 1.0032, acc: 89.95% / test_loss: 0.9983, test_acc: 90.41%
Epoch 39 - loss: 1.0015, acc: 90.13% / test_loss: 0.9990, test_acc: 90.31%
Epoch 40 - loss: 1.0025, acc: 90.05% / test_loss: 0.9996, test_acc: 90.28%
Epoch 41 - loss: 1.0020, acc: 90.07% / test_loss: 1.0001, test_acc: 90.24%
Epoch 42 - loss: 1.0013, acc: 90.16% / test_loss: 0.9975, test_acc: 90.43%
Epoch 43 - loss: 1.0002, acc: 90.29% / test_loss: 0.9976, test_acc: 90.52%
Epoch 44 - loss: 1.0006, acc: 90.25% / test_loss: 1.0012, test_acc: 90.17%
Epoch 45 - loss: 1.0014, acc: 90.11% / test_loss: 0.9997, test_acc: 90.26%
Epoch 46 - loss: 1.0000, acc: 90.26% / test_loss: 0.9977, test_acc: 90.44%
Epoch 47 - loss: 1.0002, acc: 90.27% / test_loss: 0.9970, test_acc: 90.54%
Epoch 48 - loss: 0.9983, acc: 90.42% / test_loss: 0.9966, test_acc: 90.55%
Epoch 49 - loss: 0.9995, acc: 90.35% / test_loss: 1.0001, test_acc: 90.28%
Epoch 50 - loss: 0.9988, acc: 90.37% / test_loss: 0.9962, test_acc: 90.65%
Epoch 51 - loss: 0.9969, acc: 90.52% / test_loss: 0.9962, test_acc: 90.62%
Epoch 52 - loss: 0.9994, acc: 90.30% / test_loss: 0.9962, test_acc: 90.62%
Epoch 53 - loss: 0.9983, acc: 90.44% / test_loss: 0.9967, test_acc: 90.62%
Epoch 54 - loss: 0.9956, acc: 90.69% / test_loss: 0.9940, test_acc: 90.83%
Epoch 55 - loss: 0.9947, acc: 90.74% / test_loss: 0.9951, test_acc: 90.70%
Epoch 56 - loss: 0.9941, acc: 90.83% / test_loss: 0.9967, test_acc: 90.63%
Epoch 57 - loss: 0.9951, acc: 90.71% / test_loss: 0.9931, test_acc: 90.96%
Epoch 58 - loss: 0.9932, acc: 90.92% / test_loss: 0.9918, test_acc: 90.99%
Epoch 59 - loss: 0.9918, acc: 91.06% / test_loss: 0.9881, test_acc: 91.44%
Epoch 60 - loss: 0.9877, acc: 91.42% / test_loss: 0.9868, test_acc: 91.55%
Epoch 61 - loss: 0.9858, acc: 91.66% / test_loss: 0.9905, test_acc: 91.26%
Epoch 62 - loss: 0.9846, acc: 91.76% / test_loss: 0.9843, test_acc: 91.82%
Epoch 63 - loss: 0.9821, acc: 91.98% / test_loss: 0.9843, test_acc: 91.73%
Epoch 64 - loss: 0.9815, acc: 92.10% / test_loss: 0.9820, test_acc: 92.04%
Epoch 65 - loss: 0.9795, acc: 92.28% / test_loss: 0.9836, test_acc: 91.93%
Epoch 66 - loss: 0.9797, acc: 92.25% / test_loss: 0.9816, test_acc: 92.02%
Epoch 67 - loss: 0.9790, acc: 92.32% / test_loss: 0.9812, test_acc: 92.06%
Epoch 68 - loss: 0.9791, acc: 92.32% / test_loss: 0.9796, test_acc: 92.31%
Epoch 69 - loss: 0.9785, acc: 92.38% / test_loss: 0.9790, test_acc: 92.30%
Epoch 70 - loss: 0.9777, acc: 92.46% / test_loss: 0.9791, test_acc: 92.31%
Epoch 71 - loss: 0.9778, acc: 92.40% / test_loss: 0.9864, test_acc: 91.64%
Epoch 72 - loss: 0.9772, acc: 92.50% / test_loss: 0.9788, test_acc: 92.30%
Epoch 73 - loss: 0.9768, acc: 92.51% / test_loss: 0.9797, test_acc: 92.22%
Epoch 74 - loss: 0.9775, acc: 92.42% / test_loss: 0.9827, test_acc: 91.96%
Epoch 75 - loss: 0.9765, acc: 92.51% / test_loss: 0.9778, test_acc: 92.41%
Epoch 76 - loss: 0.9756, acc: 92.64% / test_loss: 0.9792, test_acc: 92.28%
Epoch 77 - loss: 0.9756, acc: 92.65% / test_loss: 0.9796, test_acc: 92.22%
Epoch 78 - loss: 0.9763, acc: 92.53% / test_loss: 0.9793, test_acc: 92.32%
Epoch 79 - loss: 0.9751, acc: 92.65% / test_loss: 0.9820, test_acc: 91.99%
Epoch 80 - loss: 0.9754, acc: 92.63% / test_loss: 0.9773, test_acc: 92.42%
Epoch 81 - loss: 0.9751, acc: 92.69% / test_loss: 0.9917, test_acc: 90.98%
Epoch 82 - loss: 0.9766, acc: 92.55% / test_loss: 0.9811, test_acc: 92.07%
Epoch 83 - loss: 0.9750, acc: 92.69% / test_loss: 0.9783, test_acc: 92.37%
Epoch 84 - loss: 0.9737, acc: 92.82% / test_loss: 0.9772, test_acc: 92.46%
Epoch 85 - loss: 0.9743, acc: 92.76% / test_loss: 0.9784, test_acc: 92.36%
Epoch 86 - loss: 0.9742, acc: 92.76% / test_loss: 0.9765, test_acc: 92.53%
Epoch 87 - loss: 0.9734, acc: 92.84% / test_loss: 0.9798, test_acc: 92.27%
Epoch 88 - loss: 0.9731, acc: 92.86% / test_loss: 0.9762, test_acc: 92.56%
Epoch 89 - loss: 0.9731, acc: 92.87% / test_loss: 0.9765, test_acc: 92.45%
Epoch 90 - loss: 0.9734, acc: 92.82% / test_loss: 0.9763, test_acc: 92.56%
Epoch 91 - loss: 0.9729, acc: 92.88% / test_loss: 0.9764, test_acc: 92.55%
Epoch 92 - loss: 0.9740, acc: 92.77% / test_loss: 0.9764, test_acc: 92.53%
Epoch 93 - loss: 0.9718, acc: 93.04% / test_loss: 0.9737, test_acc: 92.79%
Epoch 94 - loss: 0.9711, acc: 93.02% / test_loss: 0.9751, test_acc: 92.71%
Epoch 95 - loss: 0.9712, acc: 93.05% / test_loss: 0.9750, test_acc: 92.65%
Epoch 96 - loss: 0.9718, acc: 93.01% / test_loss: 0.9747, test_acc: 92.72%
Epoch 97 - loss: 0.9704, acc: 93.11% / test_loss: 0.9752, test_acc: 92.62%
Epoch 98 - loss: 0.9700, acc: 93.16% / test_loss: 0.9749, test_acc: 92.62%
Epoch 99 - loss: 0.9713, acc: 93.03% / test_loss: 0.9738, test_acc: 92.80%
Epoch 100 - loss: 0.9699, acc: 93.16% / test_loss: 0.9738, test_acc: 92.74%
Epoch 101 - loss: 0.9692, acc: 93.19% / test_loss: 0.9732, test_acc: 92.80%
Epoch 102 - loss: 0.9714, acc: 92.99% / test_loss: 0.9733, test_acc: 92.83%
Epoch 103 - loss: 0.9691, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 104 - loss: 0.9687, acc: 93.25% / test_loss: 0.9770, test_acc: 92.53%
Epoch 105 - loss: 0.9711, acc: 93.05% / test_loss: 0.9729, test_acc: 92.88%
Epoch 106 - loss: 0.9690, acc: 93.22% / test_loss: 0.9737, test_acc: 92.71%
Epoch 107 - loss: 0.9681, acc: 93.30% / test_loss: 0.9716, test_acc: 92.92%
Epoch 108 - loss: 0.9683, acc: 93.30% / test_loss: 0.9755, test_acc: 92.63%
Epoch 109 - loss: 0.9683, acc: 93.30% / test_loss: 0.9759, test_acc: 92.53%
Epoch 110 - loss: 0.9686, acc: 93.30% / test_loss: 0.9750, test_acc: 92.69%
Epoch 111 - loss: 0.9688, acc: 93.25% / test_loss: 0.9728, test_acc: 92.89%
Epoch 112 - loss: 0.9683, acc: 93.30% / test_loss: 0.9741, test_acc: 92.72%
Epoch 113 - loss: 0.9688, acc: 93.27% / test_loss: 0.9720, test_acc: 92.90%
Epoch 114 - loss: 0.9684, acc: 93.27% / test_loss: 0.9724, test_acc: 92.86%
Epoch 115 - loss: 0.9678, acc: 93.33% / test_loss: 0.9738, test_acc: 92.77%
Epoch 116 - loss: 0.9685, acc: 93.27% / test_loss: 0.9709, test_acc: 93.05%
Epoch 117 - loss: 0.9682, acc: 93.33% / test_loss: 0.9729, test_acc: 92.91%
Epoch 118 - loss: 0.9679, acc: 93.35% / test_loss: 0.9720, test_acc: 92.93%
Epoch 119 - loss: 0.9676, acc: 93.37% / test_loss: 0.9711, test_acc: 93.02%
Epoch 120 - loss: 0.9658, acc: 93.52% / test_loss: 0.9714, test_acc: 93.01%
Epoch 121 - loss: 0.9668, acc: 93.43% / test_loss: 0.9704, test_acc: 93.14%
Epoch 122 - loss: 0.9672, acc: 93.42% / test_loss: 0.9712, test_acc: 92.99%
Epoch 123 - loss: 0.9663, acc: 93.48% / test_loss: 0.9702, test_acc: 93.08%
Epoch 124 - loss: 0.9656, acc: 93.52% / test_loss: 0.9732, test_acc: 92.77%
Epoch 125 - loss: 0.9680, acc: 93.32% / test_loss: 0.9775, test_acc: 92.32%
Epoch 126 - loss: 0.9661, acc: 93.51% / test_loss: 0.9718, test_acc: 92.95%
Epoch 127 - loss: 0.9658, acc: 93.54% / test_loss: 0.9721, test_acc: 92.85%
Epoch 128 - loss: 0.9673, acc: 93.39% / test_loss: 0.9706, test_acc: 93.05%
Epoch 129 - loss: 0.9666, acc: 93.45% / test_loss: 0.9736, test_acc: 92.71%
Epoch 130 - loss: 0.9644, acc: 93.70% / test_loss: 0.9696, test_acc: 93.18%
Epoch 131 - loss: 0.9652, acc: 93.60% / test_loss: 0.9727, test_acc: 92.81%
Epoch 132 - loss: 0.9628, acc: 93.83% / test_loss: 0.9638, test_acc: 93.40%
Epoch 133 - loss: 0.9226, acc: 98.22% / test_loss: 0.9274, test_acc: 97.77%
Epoch 134 - loss: 0.9188, acc: 98.63% / test_loss: 0.9277, test_acc: 97.73%
Epoch 135 - loss: 0.9193, acc: 98.59% / test_loss: 0.9274, test_acc: 97.77%
Epoch 136 - loss: 0.9185, acc: 98.65% / test_loss: 0.9298, test_acc: 97.54%
Epoch 137 - loss: 0.9193, acc: 98.57% / test_loss: 0.9274, test_acc: 97.77%
Epoch 138 - loss: 0.9187, acc: 98.65% / test_loss: 0.9268, test_acc: 97.80%
Epoch 139 - loss: 0.9185, acc: 98.65% / test_loss: 0.9264, test_acc: 97.82%
Epoch 140 - loss: 0.9176, acc: 98.75% / test_loss: 0.9260, test_acc: 97.87%
Epoch 141 - loss: 0.9194, acc: 98.55% / test_loss: 0.9251, test_acc: 98.00%
Epoch 142 - loss: 0.9186, acc: 98.64% / test_loss: 0.9238, test_acc: 98.11%
Epoch 143 - loss: 0.9187, acc: 98.62% / test_loss: 0.9299, test_acc: 97.55%
Epoch 144 - loss: 0.9193, acc: 98.59% / test_loss: 0.9282, test_acc: 97.66%
Epoch 145 - loss: 0.9193, acc: 98.54% / test_loss: 0.9251, test_acc: 97.93%
Epoch 146 - loss: 0.9172, acc: 98.78% / test_loss: 0.9255, test_acc: 97.95%
Epoch 147 - loss: 0.9172, acc: 98.78% / test_loss: 0.9250, test_acc: 97.95%
Epoch 148 - loss: 0.9181, acc: 98.66% / test_loss: 0.9264, test_acc: 97.83%
Epoch 149 - loss: 0.9183, acc: 98.68% / test_loss: 0.9248, test_acc: 98.01%
Epoch 150 - loss: 0.9181, acc: 98.67% / test_loss: 0.9268, test_acc: 97.76%
Epoch 151 - loss: 0.9176, acc: 98.75% / test_loss: 0.9249, test_acc: 97.98%
Epoch 152 - loss: 0.9182, acc: 98.67% / test_loss: 0.9243, test_acc: 98.07%
Epoch 153 - loss: 0.9180, acc: 98.70% / test_loss: 0.9238, test_acc: 98.10%
Epoch 154 - loss: 0.9162, acc: 98.89% / test_loss: 0.9254, test_acc: 97.93%
Epoch 155 - loss: 0.9180, acc: 98.68% / test_loss: 0.9248, test_acc: 97.98%
Epoch 156 - loss: 0.9168, acc: 98.82% / test_loss: 0.9244, test_acc: 98.04%
Epoch 157 - loss: 0.9167, acc: 98.81% / test_loss: 0.9245, test_acc: 98.03%
Epoch 158 - loss: 0.9179, acc: 98.71% / test_loss: 0.9250, test_acc: 97.98%
Epoch 159 - loss: 0.9162, acc: 98.87% / test_loss: 0.9244, test_acc: 98.07%
Epoch 160 - loss: 0.9178, acc: 98.72% / test_loss: 0.9267, test_acc: 97.80%
Epoch 161 - loss: 0.9168, acc: 98.82% / test_loss: 0.9274, test_acc: 97.73%
Epoch 162 - loss: 0.9189, acc: 98.61% / test_loss: 0.9246, test_acc: 97.99%
Epoch 163 - loss: 0.9172, acc: 98.77% / test_loss: 0.9243, test_acc: 98.03%
Epoch 164 - loss: 0.9182, acc: 98.68% / test_loss: 0.9259, test_acc: 97.92%
Epoch 165 - loss: 0.9177, acc: 98.72% / test_loss: 0.9252, test_acc: 97.92%
Epoch 166 - loss: 0.9165, acc: 98.83% / test_loss: 0.9248, test_acc: 97.98%
Epoch 167 - loss: 0.9164, acc: 98.85% / test_loss: 0.9249, test_acc: 98.02%
Epoch 168 - loss: 0.9174, acc: 98.75% / test_loss: 0.9245, test_acc: 98.05%
Epoch 169 - loss: 0.9170, acc: 98.78% / test_loss: 0.9237, test_acc: 98.13%
Epoch 170 - loss: 0.9168, acc: 98.80% / test_loss: 0.9252, test_acc: 97.93%
Epoch 171 - loss: 0.9173, acc: 98.78% / test_loss: 0.9254, test_acc: 97.92%
Epoch 172 - loss: 0.9156, acc: 98.91% / test_loss: 0.9238, test_acc: 98.10%
Epoch 173 - loss: 0.9165, acc: 98.84% / test_loss: 0.9261, test_acc: 97.86%
Epoch 174 - loss: 0.9156, acc: 98.90% / test_loss: 0.9240, test_acc: 98.10%
Epoch 175 - loss: 0.9162, acc: 98.88% / test_loss: 0.9249, test_acc: 98.03%
Epoch 176 - loss: 0.9160, acc: 98.91% / test_loss: 0.9234, test_acc: 98.15%
Epoch 177 - loss: 0.9150, acc: 99.00% / test_loss: 0.9250, test_acc: 97.97%
Epoch 178 - loss: 0.9186, acc: 98.61% / test_loss: 0.9250, test_acc: 97.99%
Epoch 179 - loss: 0.9171, acc: 98.76% / test_loss: 0.9255, test_acc: 98.01%
Epoch 180 - loss: 0.9166, acc: 98.85% / test_loss: 0.9250, test_acc: 97.98%
Epoch 181 - loss: 0.9164, acc: 98.84% / test_loss: 0.9233, test_acc: 98.10%
Epoch 182 - loss: 0.9174, acc: 98.75% / test_loss: 0.9243, test_acc: 98.06%
Epoch 183 - loss: 0.9165, acc: 98.84% / test_loss: 0.9241, test_acc: 98.04%
Epoch 184 - loss: 0.9162, acc: 98.87% / test_loss: 0.9239, test_acc: 98.08%
Epoch 185 - loss: 0.9174, acc: 98.76% / test_loss: 0.9236, test_acc: 98.10%
Epoch 186 - loss: 0.9166, acc: 98.84% / test_loss: 0.9234, test_acc: 98.10%
Epoch 187 - loss: 0.9158, acc: 98.91% / test_loss: 0.9227, test_acc: 98.20%
Epoch 188 - loss: 0.9175, acc: 98.72% / test_loss: 0.9243, test_acc: 98.04%
Epoch 189 - loss: 0.9180, acc: 98.69% / test_loss: 0.9270, test_acc: 97.75%
Epoch 190 - loss: 0.9185, acc: 98.65% / test_loss: 0.9237, test_acc: 98.17%
Epoch 191 - loss: 0.9181, acc: 98.66% / test_loss: 0.9240, test_acc: 98.04%
Epoch 192 - loss: 0.9162, acc: 98.86% / test_loss: 0.9240, test_acc: 98.06%
Epoch 193 - loss: 0.9153, acc: 98.96% / test_loss: 0.9246, test_acc: 98.02%
Epoch 194 - loss: 0.9164, acc: 98.85% / test_loss: 0.9234, test_acc: 98.14%
Epoch 195 - loss: 0.9165, acc: 98.83% / test_loss: 0.9250, test_acc: 97.99%
Epoch 196 - loss: 0.9158, acc: 98.94% / test_loss: 0.9231, test_acc: 98.17%
Epoch 197 - loss: 0.9160, acc: 98.88% / test_loss: 0.9256, test_acc: 97.95%
Epoch 198 - loss: 0.9153, acc: 98.97% / test_loss: 0.9222, test_acc: 98.26%
Epoch 199 - loss: 0.9134, acc: 99.12% / test_loss: 0.9198, test_acc: 98.49%
Epoch 200 - loss: 0.9167, acc: 98.80% / test_loss: 0.9220, test_acc: 98.32%
Epoch 201 - loss: 0.9136, acc: 99.12% / test_loss: 0.9223, test_acc: 98.21%
Epoch 202 - loss: 0.9146, acc: 99.03% / test_loss: 0.9209, test_acc: 98.39%
Epoch 203 - loss: 0.9129, acc: 99.18% / test_loss: 0.9230, test_acc: 98.20%
Epoch 204 - loss: 0.9132, acc: 99.19% / test_loss: 0.9205, test_acc: 98.44%
Epoch 205 - loss: 0.9128, acc: 99.20% / test_loss: 0.9246, test_acc: 98.03%
Epoch 206 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.35%
Epoch 207 - loss: 0.9145, acc: 99.02% / test_loss: 0.9231, test_acc: 98.17%
Epoch 208 - loss: 0.9125, acc: 99.24% / test_loss: 0.9201, test_acc: 98.50%
Epoch 209 - loss: 0.9124, acc: 99.27% / test_loss: 0.9183, test_acc: 98.65%
Epoch 210 - loss: 0.9126, acc: 99.22% / test_loss: 0.9197, test_acc: 98.51%
Epoch 211 - loss: 0.9122, acc: 99.28% / test_loss: 0.9192, test_acc: 98.57%
Epoch 212 - loss: 0.9117, acc: 99.33% / test_loss: 0.9202, test_acc: 98.47%
Epoch 213 - loss: 0.9129, acc: 99.18% / test_loss: 0.9234, test_acc: 98.12%
Epoch 214 - loss: 0.9143, acc: 99.04% / test_loss: 0.9205, test_acc: 98.44%
Epoch 215 - loss: 0.9116, acc: 99.32% / test_loss: 0.9206, test_acc: 98.42%
Epoch 216 - loss: 0.9137, acc: 99.11% / test_loss: 0.9241, test_acc: 98.08%
Epoch 217 - loss: 0.9128, acc: 99.21% / test_loss: 0.9235, test_acc: 98.14%
Epoch 218 - loss: 0.9122, acc: 99.28% / test_loss: 0.9181, test_acc: 98.67%
Epoch 219 - loss: 0.9123, acc: 99.24% / test_loss: 0.9206, test_acc: 98.43%
Epoch 220 - loss: 0.9138, acc: 99.09% / test_loss: 0.9192, test_acc: 98.56%
Epoch 221 - loss: 0.9124, acc: 99.24% / test_loss: 0.9182, test_acc: 98.65%
Epoch 222 - loss: 0.9138, acc: 99.09% / test_loss: 0.9223, test_acc: 98.29%
Epoch 223 - loss: 0.9127, acc: 99.23% / test_loss: 0.9179, test_acc: 98.70%
Epoch 224 - loss: 0.9127, acc: 99.23% / test_loss: 0.9204, test_acc: 98.39%
Epoch 225 - loss: 0.9120, acc: 99.30% / test_loss: 0.9202, test_acc: 98.45%
Epoch 226 - loss: 0.9116, acc: 99.34% / test_loss: 0.9198, test_acc: 98.51%
Epoch 227 - loss: 0.9120, acc: 99.28% / test_loss: 0.9281, test_acc: 97.67%
Epoch 228 - loss: 0.9131, acc: 99.18% / test_loss: 0.9207, test_acc: 98.42%
Epoch 229 - loss: 0.9119, acc: 99.31% / test_loss: 0.9191, test_acc: 98.57%
Epoch 230 - loss: 0.9109, acc: 99.41% / test_loss: 0.9182, test_acc: 98.68%
Epoch 231 - loss: 0.9110, acc: 99.37% / test_loss: 0.9232, test_acc: 98.13%
Epoch 232 - loss: 0.9118, acc: 99.31% / test_loss: 0.9190, test_acc: 98.58%
Epoch 233 - loss: 0.9129, acc: 99.19% / test_loss: 0.9185, test_acc: 98.67%
Epoch 234 - loss: 0.9112, acc: 99.37% / test_loss: 0.9198, test_acc: 98.53%
Epoch 235 - loss: 0.9142, acc: 99.07% / test_loss: 0.9191, test_acc: 98.60%
Epoch 236 - loss: 0.9101, acc: 99.49% / test_loss: 0.9183, test_acc: 98.66%
Epoch 237 - loss: 0.9116, acc: 99.31% / test_loss: 0.9186, test_acc: 98.63%
Epoch 238 - loss: 0.9133, acc: 99.17% / test_loss: 0.9189, test_acc: 98.58%
Epoch 239 - loss: 0.9118, acc: 99.32% / test_loss: 0.9187, test_acc: 98.60%
Epoch 240 - loss: 0.9105, acc: 99.43% / test_loss: 0.9186, test_acc: 98.62%
Epoch 241 - loss: 0.9119, acc: 99.30% / test_loss: 0.9197, test_acc: 98.51%
Epoch 242 - loss: 0.9135, acc: 99.11% / test_loss: 0.9189, test_acc: 98.57%
Epoch 243 - loss: 0.9110, acc: 99.41% / test_loss: 0.9186, test_acc: 98.61%
Epoch 244 - loss: 0.9110, acc: 99.38% / test_loss: 0.9179, test_acc: 98.67%
Epoch 245 - loss: 0.9112, acc: 99.37% / test_loss: 0.9220, test_acc: 98.26%
Epoch 246 - loss: 0.9127, acc: 99.21% / test_loss: 0.9266, test_acc: 97.78%
Epoch 247 - loss: 0.9118, acc: 99.29% / test_loss: 0.9192, test_acc: 98.57%
Epoch 248 - loss: 0.9103, acc: 99.48% / test_loss: 0.9197, test_acc: 98.48%
Epoch 249 - loss: 0.9112, acc: 99.37% / test_loss: 0.9186, test_acc: 98.61%
Epoch 250 - loss: 0.9101, acc: 99.47% / test_loss: 0.9179, test_acc: 98.69%
Epoch 251 - loss: 0.9115, acc: 99.31% / test_loss: 0.9214, test_acc: 98.38%
Epoch 252 - loss: 0.9114, acc: 99.35% / test_loss: 0.9218, test_acc: 98.28%
Epoch 253 - loss: 0.9104, acc: 99.44% / test_loss: 0.9179, test_acc: 98.72%
Epoch 254 - loss: 0.9110, acc: 99.38% / test_loss: 0.9198, test_acc: 98.51%
Epoch 255 - loss: 0.9105, acc: 99.44% / test_loss: 0.9178, test_acc: 98.69%
Epoch 256 - loss: 0.9112, acc: 99.34% / test_loss: 0.9200, test_acc: 98.47%
Epoch 257 - loss: 0.9110, acc: 99.40% / test_loss: 0.9190, test_acc: 98.58%
Epoch 258 - loss: 0.9118, acc: 99.31% / test_loss: 0.9196, test_acc: 98.53%
Epoch 259 - loss: 0.9106, acc: 99.43% / test_loss: 0.9188, test_acc: 98.58%
Epoch 260 - loss: 0.9106, acc: 99.43% / test_loss: 0.9210, test_acc: 98.35%
Epoch 261 - loss: 0.9106, acc: 99.41% / test_loss: 0.9198, test_acc: 98.51%
Epoch 262 - loss: 0.9097, acc: 99.52% / test_loss: 0.9182, test_acc: 98.63%
Epoch 263 - loss: 0.9105, acc: 99.42% / test_loss: 0.9189, test_acc: 98.56%
Epoch 264 - loss: 0.9101, acc: 99.49% / test_loss: 0.9185, test_acc: 98.65%
Epoch 265 - loss: 0.9099, acc: 99.50% / test_loss: 0.9183, test_acc: 98.67%
Epoch 266 - loss: 0.9111, acc: 99.37% / test_loss: 0.9188, test_acc: 98.60%
Epoch 267 - loss: 0.9110, acc: 99.40% / test_loss: 0.9219, test_acc: 98.27%
Epoch 268 - loss: 0.9125, acc: 99.21% / test_loss: 0.9183, test_acc: 98.66%
Epoch 269 - loss: 0.9103, acc: 99.45% / test_loss: 0.9193, test_acc: 98.57%
Epoch 270 - loss: 0.9106, acc: 99.40% / test_loss: 0.9184, test_acc: 98.66%
Epoch 271 - loss: 0.9118, acc: 99.29% / test_loss: 0.9190, test_acc: 98.56%
Epoch 272 - loss: 0.9118, acc: 99.30% / test_loss: 0.9184, test_acc: 98.63%
Epoch 273 - loss: 0.9106, acc: 99.43% / test_loss: 0.9172, test_acc: 98.75%
Epoch 274 - loss: 0.9101, acc: 99.50% / test_loss: 0.9171, test_acc: 98.78%
Epoch 275 - loss: 0.9099, acc: 99.51% / test_loss: 0.9179, test_acc: 98.68%
Epoch 276 - loss: 0.9098, acc: 99.52% / test_loss: 0.9189, test_acc: 98.57%
Epoch 277 - loss: 0.9121, acc: 99.26% / test_loss: 0.9220, test_acc: 98.27%
Epoch 278 - loss: 0.9108, acc: 99.40% / test_loss: 0.9182, test_acc: 98.66%
Epoch 279 - loss: 0.9108, acc: 99.40% / test_loss: 0.9187, test_acc: 98.64%
Epoch 280 - loss: 0.9104, acc: 99.45% / test_loss: 0.9187, test_acc: 98.62%
Epoch 281 - loss: 0.9100, acc: 99.48% / test_loss: 0.9189, test_acc: 98.60%
Epoch 282 - loss: 0.9098, acc: 99.52% / test_loss: 0.9189, test_acc: 98.55%
Epoch 283 - loss: 0.9114, acc: 99.35% / test_loss: 0.9187, test_acc: 98.63%
Epoch 284 - loss: 0.9121, acc: 99.26% / test_loss: 0.9185, test_acc: 98.62%
Epoch 285 - loss: 0.9114, acc: 99.34% / test_loss: 0.9241, test_acc: 98.08%
Epoch 286 - loss: 0.9104, acc: 99.44% / test_loss: 0.9187, test_acc: 98.62%
Epoch 287 - loss: 0.9108, acc: 99.39% / test_loss: 0.9193, test_acc: 98.54%
Epoch 288 - loss: 0.9096, acc: 99.52% / test_loss: 0.9180, test_acc: 98.69%
Epoch 289 - loss: 0.9103, acc: 99.44% / test_loss: 0.9192, test_acc: 98.54%
Epoch 290 - loss: 0.9119, acc: 99.28% / test_loss: 0.9193, test_acc: 98.54%
Epoch 291 - loss: 0.9107, acc: 99.42% / test_loss: 0.9231, test_acc: 98.14%
Epoch 292 - loss: 0.9100, acc: 99.48% / test_loss: 0.9185, test_acc: 98.64%
Epoch 293 - loss: 0.9099, acc: 99.49% / test_loss: 0.9190, test_acc: 98.59%
Epoch 294 - loss: 0.9105, acc: 99.45% / test_loss: 0.9187, test_acc: 98.62%
Epoch 295 - loss: 0.9108, acc: 99.40% / test_loss: 0.9195, test_acc: 98.54%
Epoch 296 - loss: 0.9109, acc: 99.40% / test_loss: 0.9192, test_acc: 98.54%
Epoch 297 - loss: 0.9099, acc: 99.49% / test_loss: 0.9175, test_acc: 98.75%
Epoch 298 - loss: 0.9105, acc: 99.43% / test_loss: 0.9185, test_acc: 98.62%
Epoch 299 - loss: 0.9109, acc: 99.40% / test_loss: 0.9260, test_acc: 97.91%
Epoch 300 - loss: 0.9154, acc: 98.95% / test_loss: 0.9195, test_acc: 98.54%
Epoch 301 - loss: 0.9107, acc: 99.41% / test_loss: 0.9183, test_acc: 98.65%
Epoch 302 - loss: 0.9119, acc: 99.30% / test_loss: 0.9228, test_acc: 98.20%
Epoch 303 - loss: 0.9136, acc: 99.13% / test_loss: 0.9205, test_acc: 98.40%
Epoch 304 - loss: 0.9102, acc: 99.47% / test_loss: 0.9196, test_acc: 98.53%
Epoch 305 - loss: 0.9104, acc: 99.43% / test_loss: 0.9176, test_acc: 98.72%
Epoch 306 - loss: 0.9102, acc: 99.46% / test_loss: 0.9186, test_acc: 98.60%
Epoch 307 - loss: 0.9097, acc: 99.52% / test_loss: 0.9181, test_acc: 98.66%
Epoch 308 - loss: 0.9099, acc: 99.49% / test_loss: 0.9220, test_acc: 98.29%
Epoch 309 - loss: 0.9120, acc: 99.28% / test_loss: 0.9176, test_acc: 98.72%
Epoch 310 - loss: 0.9100, acc: 99.49% / test_loss: 0.9180, test_acc: 98.71%
Epoch 311 - loss: 0.9093, acc: 99.57% / test_loss: 0.9174, test_acc: 98.74%
Epoch 312 - loss: 0.9090, acc: 99.58% / test_loss: 0.9170, test_acc: 98.78%
Epoch 313 - loss: 0.9114, acc: 99.34% / test_loss: 0.9275, test_acc: 97.70%
Epoch 314 - loss: 0.9106, acc: 99.41% / test_loss: 0.9180, test_acc: 98.68%
Epoch 315 - loss: 0.9099, acc: 99.49% / test_loss: 0.9190, test_acc: 98.59%
Epoch 316 - loss: 0.9114, acc: 99.34% / test_loss: 0.9334, test_acc: 97.17%
Epoch 317 - loss: 0.9110, acc: 99.39% / test_loss: 0.9174, test_acc: 98.74%
Epoch 318 - loss: 0.9112, acc: 99.36% / test_loss: 0.9186, test_acc: 98.59%
Epoch 319 - loss: 0.9123, acc: 99.24% / test_loss: 0.9177, test_acc: 98.71%
Epoch 320 - loss: 0.9104, acc: 99.43% / test_loss: 0.9175, test_acc: 98.72%
Epoch 321 - loss: 0.9092, acc: 99.57% / test_loss: 0.9179, test_acc: 98.68%
Epoch 322 - loss: 0.9094, acc: 99.55% / test_loss: 0.9186, test_acc: 98.62%
Epoch 323 - loss: 0.9109, acc: 99.40% / test_loss: 0.9187, test_acc: 98.60%
Epoch 324 - loss: 0.9108, acc: 99.41% / test_loss: 0.9186, test_acc: 98.60%
Epoch 325 - loss: 0.9095, acc: 99.53% / test_loss: 0.9173, test_acc: 98.75%
Epoch 326 - loss: 0.9091, acc: 99.58% / test_loss: 0.9226, test_acc: 98.19%
Epoch 327 - loss: 0.9123, acc: 99.23% / test_loss: 0.9202, test_acc: 98.43%
Epoch 328 - loss: 0.9095, acc: 99.52% / test_loss: 0.9194, test_acc: 98.51%
Epoch 329 - loss: 0.9098, acc: 99.50% / test_loss: 0.9176, test_acc: 98.72%
Epoch 330 - loss: 0.9089, acc: 99.60% / test_loss: 0.9178, test_acc: 98.68%
Epoch 331 - loss: 0.9098, acc: 99.50% / test_loss: 0.9221, test_acc: 98.25%
Epoch 332 - loss: 0.9123, acc: 99.23% / test_loss: 0.9276, test_acc: 97.69%
Epoch 333 - loss: 0.9110, acc: 99.39% / test_loss: 0.9184, test_acc: 98.63%
Epoch 334 - loss: 0.9102, acc: 99.47% / test_loss: 0.9195, test_acc: 98.51%
Epoch 335 - loss: 0.9100, acc: 99.47% / test_loss: 0.9179, test_acc: 98.70%
Epoch 336 - loss: 0.9088, acc: 99.61% / test_loss: 0.9176, test_acc: 98.72%
Epoch 337 - loss: 0.9095, acc: 99.54% / test_loss: 0.9179, test_acc: 98.66%
Epoch 338 - loss: 0.9102, acc: 99.48% / test_loss: 0.9189, test_acc: 98.60%
Epoch 339 - loss: 0.9121, acc: 99.27% / test_loss: 0.9183, test_acc: 98.65%
Epoch 340 - loss: 0.9106, acc: 99.40% / test_loss: 0.9186, test_acc: 98.63%
Epoch 341 - loss: 0.9107, acc: 99.42% / test_loss: 0.9192, test_acc: 98.54%
Epoch 342 - loss: 0.9103, acc: 99.43% / test_loss: 0.9173, test_acc: 98.77%
Epoch 343 - loss: 0.9085, acc: 99.64% / test_loss: 0.9178, test_acc: 98.71%
Epoch 344 - loss: 0.9095, acc: 99.53% / test_loss: 0.9173, test_acc: 98.74%
Epoch 345 - loss: 0.9091, acc: 99.58% / test_loss: 0.9177, test_acc: 98.70%
Epoch 346 - loss: 0.9108, acc: 99.40% / test_loss: 0.9184, test_acc: 98.65%
Epoch 347 - loss: 0.9105, acc: 99.45% / test_loss: 0.9208, test_acc: 98.42%
Epoch 348 - loss: 0.9116, acc: 99.34% / test_loss: 0.9191, test_acc: 98.54%
Epoch 349 - loss: 0.9098, acc: 99.49% / test_loss: 0.9205, test_acc: 98.44%
Epoch 350 - loss: 0.9092, acc: 99.57% / test_loss: 0.9178, test_acc: 98.66%
Epoch 351 - loss: 0.9099, acc: 99.50% / test_loss: 0.9210, test_acc: 98.38%
Epoch 352 - loss: 0.9105, acc: 99.43% / test_loss: 0.9191, test_acc: 98.60%
Epoch 353 - loss: 0.9109, acc: 99.40% / test_loss: 0.9189, test_acc: 98.61%
Epoch 354 - loss: 0.9115, acc: 99.32% / test_loss: 0.9195, test_acc: 98.54%
Epoch 355 - loss: 0.9094, acc: 99.55% / test_loss: 0.9183, test_acc: 98.66%
Epoch 356 - loss: 0.9109, acc: 99.40% / test_loss: 0.9177, test_acc: 98.73%
Epoch 357 - loss: 0.9096, acc: 99.52% / test_loss: 0.9181, test_acc: 98.64%
Epoch 358 - loss: 0.9098, acc: 99.49% / test_loss: 0.9188, test_acc: 98.61%
Epoch 359 - loss: 0.9108, acc: 99.43% / test_loss: 0.9181, test_acc: 98.69%
Epoch 360 - loss: 0.9088, acc: 99.61% / test_loss: 0.9172, test_acc: 98.78%
Epoch 361 - loss: 0.9099, acc: 99.50% / test_loss: 0.9185, test_acc: 98.61%
Epoch 362 - loss: 0.9095, acc: 99.53% / test_loss: 0.9180, test_acc: 98.69%
Epoch 363 - loss: 0.9088, acc: 99.61% / test_loss: 0.9168, test_acc: 98.80%
Epoch 364 - loss: 0.9096, acc: 99.53% / test_loss: 0.9176, test_acc: 98.69%
Epoch 365 - loss: 0.9088, acc: 99.61% / test_loss: 0.9173, test_acc: 98.76%
Epoch 366 - loss: 0.9088, acc: 99.61% / test_loss: 0.9175, test_acc: 98.75%
Epoch 367 - loss: 0.9097, acc: 99.53% / test_loss: 0.9204, test_acc: 98.46%
Epoch 368 - loss: 0.9114, acc: 99.34% / test_loss: 0.9170, test_acc: 98.78%
Epoch 369 - loss: 0.9093, acc: 99.55% / test_loss: 0.9178, test_acc: 98.68%
Epoch 370 - loss: 0.9110, acc: 99.38% / test_loss: 0.9220, test_acc: 98.26%
Epoch 371 - loss: 0.9105, acc: 99.43% / test_loss: 0.9185, test_acc: 98.64%
Epoch 372 - loss: 0.9101, acc: 99.47% / test_loss: 0.9189, test_acc: 98.60%
Epoch 373 - loss: 0.9110, acc: 99.40% / test_loss: 0.9183, test_acc: 98.66%
Epoch 374 - loss: 0.9102, acc: 99.46% / test_loss: 0.9186, test_acc: 98.64%
Epoch 375 - loss: 0.9094, acc: 99.55% / test_loss: 0.9187, test_acc: 98.60%
Epoch 376 - loss: 0.9092, acc: 99.56% / test_loss: 0.9179, test_acc: 98.69%
Epoch 377 - loss: 0.9100, acc: 99.48% / test_loss: 0.9186, test_acc: 98.60%
Epoch 378 - loss: 0.9107, acc: 99.41% / test_loss: 0.9225, test_acc: 98.23%
Epoch 379 - loss: 0.9112, acc: 99.36% / test_loss: 0.9205, test_acc: 98.45%
Epoch 380 - loss: 0.9099, acc: 99.49% / test_loss: 0.9196, test_acc: 98.53%
Epoch 381 - loss: 0.9094, acc: 99.55% / test_loss: 0.9192, test_acc: 98.57%
Epoch 382 - loss: 0.9095, acc: 99.52% / test_loss: 0.9176, test_acc: 98.73%
Epoch 383 - loss: 0.9089, acc: 99.60% / test_loss: 0.9187, test_acc: 98.60%
Epoch 384 - loss: 0.9085, acc: 99.64% / test_loss: 0.9183, test_acc: 98.64%
Epoch 385 - loss: 0.9087, acc: 99.62% / test_loss: 0.9173, test_acc: 98.73%
Epoch 386 - loss: 0.9111, acc: 99.37% / test_loss: 0.9216, test_acc: 98.29%
Epoch 387 - loss: 0.9094, acc: 99.55% / test_loss: 0.9181, test_acc: 98.66%
Epoch 388 - loss: 0.9096, acc: 99.52% / test_loss: 0.9180, test_acc: 98.66%
Epoch 389 - loss: 0.9089, acc: 99.59% / test_loss: 0.9180, test_acc: 98.68%
Epoch 390 - loss: 0.9088, acc: 99.61% / test_loss: 0.9202, test_acc: 98.46%
Epoch 391 - loss: 0.9088, acc: 99.61% / test_loss: 0.9173, test_acc: 98.75%
Epoch 392 - loss: 0.9094, acc: 99.53% / test_loss: 0.9186, test_acc: 98.60%
Epoch 393 - loss: 0.9100, acc: 99.47% / test_loss: 0.9180, test_acc: 98.68%
Epoch 394 - loss: 0.9098, acc: 99.51% / test_loss: 0.9178, test_acc: 98.67%
Epoch 395 - loss: 0.9097, acc: 99.50% / test_loss: 0.9189, test_acc: 98.57%
Epoch 396 - loss: 0.9094, acc: 99.52% / test_loss: 0.9190, test_acc: 98.59%
Epoch 397 - loss: 0.9113, acc: 99.34% / test_loss: 0.9167, test_acc: 98.81%
Epoch 398 - loss: 0.9092, acc: 99.56% / test_loss: 0.9171, test_acc: 98.76%
Epoch 399 - loss: 0.9105, acc: 99.43% / test_loss: 0.9181, test_acc: 98.69%
Epoch 400 - loss: 0.9094, acc: 99.54% / test_loss: 0.9179, test_acc: 98.69%
Best test accuracy 98.81% in epoch 397.
----------------------------------------------------------------------------------------------------
Run 5
Epoch 1 - loss: 1.3534, acc: 54.99% / test_loss: 1.2151, test_acc: 69.57%
Epoch 2 - loss: 1.1385, acc: 77.43% / test_loss: 1.0894, test_acc: 82.23%
Epoch 3 - loss: 1.0746, acc: 83.60% / test_loss: 1.0468, test_acc: 86.39%
Epoch 4 - loss: 1.0475, acc: 86.17% / test_loss: 1.0395, test_acc: 86.95%
Epoch 5 - loss: 1.0394, acc: 86.79% / test_loss: 1.0265, test_acc: 87.97%
Epoch 6 - loss: 1.0334, acc: 87.23% / test_loss: 1.0264, test_acc: 88.11%
Epoch 7 - loss: 1.0310, acc: 87.54% / test_loss: 1.0187, test_acc: 88.66%
Epoch 8 - loss: 1.0264, acc: 87.82% / test_loss: 1.0147, test_acc: 88.95%
Epoch 9 - loss: 1.0246, acc: 88.00% / test_loss: 1.0142, test_acc: 89.17%
Epoch 10 - loss: 1.0216, acc: 88.37% / test_loss: 1.0122, test_acc: 89.27%
Epoch 11 - loss: 1.0211, acc: 88.35% / test_loss: 1.0137, test_acc: 89.04%
Epoch 12 - loss: 1.0181, acc: 88.55% / test_loss: 1.0105, test_acc: 89.35%
Epoch 13 - loss: 1.0184, acc: 88.61% / test_loss: 1.0208, test_acc: 88.61%
Epoch 14 - loss: 1.0148, acc: 88.96% / test_loss: 1.0083, test_acc: 89.61%
Epoch 15 - loss: 1.0145, acc: 88.94% / test_loss: 1.0099, test_acc: 89.51%
Epoch 16 - loss: 1.0132, acc: 89.08% / test_loss: 1.0067, test_acc: 89.68%
Epoch 17 - loss: 1.0123, acc: 89.19% / test_loss: 1.0069, test_acc: 89.70%
Epoch 18 - loss: 1.0115, acc: 89.19% / test_loss: 1.0070, test_acc: 89.67%
Epoch 19 - loss: 1.0128, acc: 89.08% / test_loss: 1.0058, test_acc: 89.84%
Epoch 20 - loss: 1.0109, acc: 89.25% / test_loss: 1.0043, test_acc: 89.85%
Epoch 21 - loss: 1.0090, acc: 89.41% / test_loss: 1.0022, test_acc: 90.12%
Epoch 22 - loss: 1.0079, acc: 89.57% / test_loss: 1.0028, test_acc: 90.04%
Epoch 23 - loss: 1.0074, acc: 89.69% / test_loss: 1.0045, test_acc: 89.77%
Epoch 24 - loss: 1.0079, acc: 89.57% / test_loss: 1.0088, test_acc: 89.45%
Epoch 25 - loss: 1.0081, acc: 89.51% / test_loss: 1.0027, test_acc: 89.96%
Epoch 26 - loss: 1.0055, acc: 89.74% / test_loss: 1.0036, test_acc: 89.92%
Epoch 27 - loss: 1.0076, acc: 89.56% / test_loss: 1.0054, test_acc: 89.81%
Epoch 28 - loss: 1.0046, acc: 89.87% / test_loss: 1.0035, test_acc: 89.98%
Epoch 29 - loss: 1.0049, acc: 89.84% / test_loss: 1.0005, test_acc: 90.25%
Epoch 30 - loss: 1.0059, acc: 89.73% / test_loss: 1.0029, test_acc: 90.04%
Epoch 31 - loss: 1.0060, acc: 89.75% / test_loss: 1.0031, test_acc: 89.97%
Epoch 32 - loss: 1.0041, acc: 89.85% / test_loss: 1.0022, test_acc: 90.07%
Epoch 33 - loss: 1.0052, acc: 89.82% / test_loss: 1.0063, test_acc: 89.60%
Epoch 34 - loss: 1.0037, acc: 89.92% / test_loss: 1.0008, test_acc: 90.22%
Epoch 35 - loss: 1.0054, acc: 89.74% / test_loss: 0.9993, test_acc: 90.28%
Epoch 36 - loss: 1.0039, acc: 89.85% / test_loss: 1.0014, test_acc: 90.15%
Epoch 37 - loss: 1.0040, acc: 89.86% / test_loss: 1.0004, test_acc: 90.25%
Epoch 38 - loss: 1.0038, acc: 89.89% / test_loss: 1.0044, test_acc: 89.79%
Epoch 39 - loss: 1.0041, acc: 89.87% / test_loss: 1.0021, test_acc: 90.11%
Epoch 40 - loss: 1.0026, acc: 89.97% / test_loss: 1.0000, test_acc: 90.22%
Epoch 41 - loss: 1.0041, acc: 89.91% / test_loss: 1.0006, test_acc: 90.21%
Epoch 42 - loss: 1.0035, acc: 89.94% / test_loss: 1.0025, test_acc: 90.07%
Epoch 43 - loss: 1.0027, acc: 90.02% / test_loss: 1.0001, test_acc: 90.24%
Epoch 44 - loss: 1.0053, acc: 89.72% / test_loss: 1.0011, test_acc: 90.15%
Epoch 45 - loss: 1.0025, acc: 90.00% / test_loss: 0.9992, test_acc: 90.33%
Epoch 46 - loss: 1.0031, acc: 89.92% / test_loss: 0.9981, test_acc: 90.39%
Epoch 47 - loss: 1.0014, acc: 90.08% / test_loss: 0.9998, test_acc: 90.25%
Epoch 48 - loss: 1.0021, acc: 90.00% / test_loss: 1.0079, test_acc: 89.63%
Epoch 49 - loss: 1.0029, acc: 89.96% / test_loss: 1.0013, test_acc: 90.25%
Epoch 50 - loss: 1.0019, acc: 90.06% / test_loss: 0.9987, test_acc: 90.32%
Epoch 51 - loss: 1.0034, acc: 89.92% / test_loss: 0.9987, test_acc: 90.34%
Epoch 52 - loss: 1.0019, acc: 90.03% / test_loss: 0.9992, test_acc: 90.28%
Epoch 53 - loss: 1.0019, acc: 90.06% / test_loss: 1.0011, test_acc: 90.07%
Epoch 54 - loss: 1.0017, acc: 90.12% / test_loss: 0.9981, test_acc: 90.43%
Epoch 55 - loss: 1.0014, acc: 90.10% / test_loss: 0.9991, test_acc: 90.37%
Epoch 56 - loss: 1.0015, acc: 90.06% / test_loss: 0.9990, test_acc: 90.30%
Epoch 57 - loss: 1.0000, acc: 90.25% / test_loss: 0.9969, test_acc: 90.46%
Epoch 58 - loss: 1.0005, acc: 90.14% / test_loss: 0.9979, test_acc: 90.37%
Epoch 59 - loss: 0.9991, acc: 90.28% / test_loss: 0.9968, test_acc: 90.53%
Epoch 60 - loss: 1.0003, acc: 90.22% / test_loss: 0.9974, test_acc: 90.46%
Epoch 61 - loss: 0.9986, acc: 90.36% / test_loss: 0.9952, test_acc: 90.66%
Epoch 62 - loss: 0.9973, acc: 90.45% / test_loss: 0.9979, test_acc: 90.40%
Epoch 63 - loss: 0.9977, acc: 90.44% / test_loss: 0.9950, test_acc: 90.68%
Epoch 64 - loss: 0.9979, acc: 90.43% / test_loss: 0.9966, test_acc: 90.55%
Epoch 65 - loss: 0.9977, acc: 90.43% / test_loss: 0.9976, test_acc: 90.46%
Epoch 66 - loss: 0.9978, acc: 90.43% / test_loss: 0.9955, test_acc: 90.68%
Epoch 67 - loss: 0.9973, acc: 90.48% / test_loss: 0.9961, test_acc: 90.59%
Epoch 68 - loss: 0.9969, acc: 90.50% / test_loss: 0.9963, test_acc: 90.59%
Epoch 69 - loss: 0.9960, acc: 90.64% / test_loss: 0.9943, test_acc: 90.88%
Epoch 70 - loss: 0.9948, acc: 90.71% / test_loss: 0.9902, test_acc: 91.17%
Epoch 71 - loss: 0.9882, acc: 91.40% / test_loss: 0.9857, test_acc: 91.60%
Epoch 72 - loss: 0.9862, acc: 91.57% / test_loss: 0.9941, test_acc: 90.94%
Epoch 73 - loss: 0.9840, acc: 91.82% / test_loss: 0.9851, test_acc: 91.74%
Epoch 74 - loss: 0.9826, acc: 91.95% / test_loss: 0.9836, test_acc: 91.88%
Epoch 75 - loss: 0.9816, acc: 92.02% / test_loss: 0.9828, test_acc: 91.94%
Epoch 76 - loss: 0.9796, acc: 92.26% / test_loss: 0.9824, test_acc: 91.92%
Epoch 77 - loss: 0.9796, acc: 92.22% / test_loss: 0.9837, test_acc: 91.78%
Epoch 78 - loss: 0.9790, acc: 92.30% / test_loss: 0.9852, test_acc: 91.67%
Epoch 79 - loss: 0.9775, acc: 92.43% / test_loss: 0.9809, test_acc: 92.13%
Epoch 80 - loss: 0.9773, acc: 92.51% / test_loss: 0.9820, test_acc: 92.00%
Epoch 81 - loss: 0.9763, acc: 92.53% / test_loss: 0.9800, test_acc: 92.21%
Epoch 82 - loss: 0.9780, acc: 92.37% / test_loss: 0.9798, test_acc: 92.22%
Epoch 83 - loss: 0.9753, acc: 92.65% / test_loss: 0.9797, test_acc: 92.14%
Epoch 84 - loss: 0.9754, acc: 92.61% / test_loss: 0.9789, test_acc: 92.27%
Epoch 85 - loss: 0.9769, acc: 92.46% / test_loss: 0.9787, test_acc: 92.29%
Epoch 86 - loss: 0.9753, acc: 92.62% / test_loss: 0.9807, test_acc: 92.13%
Epoch 87 - loss: 0.9749, acc: 92.68% / test_loss: 0.9777, test_acc: 92.36%
Epoch 88 - loss: 0.9738, acc: 92.80% / test_loss: 0.9771, test_acc: 92.44%
Epoch 89 - loss: 0.9747, acc: 92.75% / test_loss: 0.9782, test_acc: 92.34%
Epoch 90 - loss: 0.9754, acc: 92.63% / test_loss: 0.9776, test_acc: 92.40%
Epoch 91 - loss: 0.9756, acc: 92.59% / test_loss: 0.9778, test_acc: 92.35%
Epoch 92 - loss: 0.9735, acc: 92.80% / test_loss: 0.9770, test_acc: 92.45%
Epoch 93 - loss: 0.9736, acc: 92.80% / test_loss: 0.9775, test_acc: 92.36%
Epoch 94 - loss: 0.9737, acc: 92.81% / test_loss: 0.9779, test_acc: 92.39%
Epoch 95 - loss: 0.9737, acc: 92.77% / test_loss: 0.9788, test_acc: 92.29%
Epoch 96 - loss: 0.9731, acc: 92.87% / test_loss: 0.9783, test_acc: 92.31%
Epoch 97 - loss: 0.9742, acc: 92.75% / test_loss: 0.9770, test_acc: 92.49%
Epoch 98 - loss: 0.9742, acc: 92.74% / test_loss: 0.9775, test_acc: 92.40%
Epoch 99 - loss: 0.9728, acc: 92.91% / test_loss: 0.9771, test_acc: 92.44%
Epoch 100 - loss: 0.9724, acc: 92.94% / test_loss: 0.9761, test_acc: 92.53%
Epoch 101 - loss: 0.9722, acc: 92.94% / test_loss: 0.9789, test_acc: 92.29%
Epoch 102 - loss: 0.9719, acc: 92.96% / test_loss: 0.9764, test_acc: 92.53%
Epoch 103 - loss: 0.9712, acc: 93.02% / test_loss: 0.9750, test_acc: 92.65%
Epoch 104 - loss: 0.9727, acc: 92.88% / test_loss: 0.9769, test_acc: 92.50%
Epoch 105 - loss: 0.9729, acc: 92.86% / test_loss: 0.9762, test_acc: 92.55%
Epoch 106 - loss: 0.9725, acc: 92.90% / test_loss: 0.9747, test_acc: 92.66%
Epoch 107 - loss: 0.9724, acc: 92.93% / test_loss: 0.9760, test_acc: 92.58%
Epoch 108 - loss: 0.9721, acc: 92.94% / test_loss: 0.9766, test_acc: 92.48%
Epoch 109 - loss: 0.9716, acc: 92.97% / test_loss: 0.9772, test_acc: 92.39%
Epoch 110 - loss: 0.9725, acc: 92.86% / test_loss: 0.9824, test_acc: 91.94%
Epoch 111 - loss: 0.9725, acc: 92.89% / test_loss: 0.9748, test_acc: 92.65%
Epoch 112 - loss: 0.9703, acc: 93.10% / test_loss: 0.9756, test_acc: 92.56%
Epoch 113 - loss: 0.9716, acc: 92.99% / test_loss: 0.9764, test_acc: 92.54%
Epoch 114 - loss: 0.9716, acc: 92.97% / test_loss: 0.9778, test_acc: 92.49%
Epoch 115 - loss: 0.9713, acc: 93.02% / test_loss: 0.9744, test_acc: 92.71%
Epoch 116 - loss: 0.9710, acc: 93.07% / test_loss: 0.9759, test_acc: 92.53%
Epoch 117 - loss: 0.9722, acc: 92.89% / test_loss: 0.9755, test_acc: 92.59%
Epoch 118 - loss: 0.9714, acc: 93.01% / test_loss: 0.9768, test_acc: 92.45%
Epoch 119 - loss: 0.9712, acc: 93.05% / test_loss: 0.9747, test_acc: 92.68%
Epoch 120 - loss: 0.9702, acc: 93.10% / test_loss: 0.9745, test_acc: 92.64%
Epoch 121 - loss: 0.9706, acc: 93.06% / test_loss: 0.9736, test_acc: 92.81%
Epoch 122 - loss: 0.9732, acc: 92.79% / test_loss: 0.9776, test_acc: 92.43%
Epoch 123 - loss: 0.9710, acc: 93.03% / test_loss: 0.9750, test_acc: 92.65%
Epoch 124 - loss: 0.9712, acc: 93.05% / test_loss: 0.9738, test_acc: 92.74%
Epoch 125 - loss: 0.9709, acc: 93.04% / test_loss: 0.9750, test_acc: 92.62%
Epoch 126 - loss: 0.9706, acc: 93.05% / test_loss: 0.9755, test_acc: 92.56%
Epoch 127 - loss: 0.9694, acc: 93.17% / test_loss: 0.9769, test_acc: 92.43%
Epoch 128 - loss: 0.9709, acc: 93.03% / test_loss: 0.9755, test_acc: 92.56%
Epoch 129 - loss: 0.9691, acc: 93.24% / test_loss: 0.9733, test_acc: 92.79%
Epoch 130 - loss: 0.9704, acc: 93.10% / test_loss: 0.9740, test_acc: 92.75%
Epoch 131 - loss: 0.9711, acc: 93.02% / test_loss: 0.9749, test_acc: 92.61%
Epoch 132 - loss: 0.9702, acc: 93.10% / test_loss: 0.9755, test_acc: 92.57%
Epoch 133 - loss: 0.9700, acc: 93.12% / test_loss: 0.9745, test_acc: 92.65%
Epoch 134 - loss: 0.9688, acc: 93.27% / test_loss: 0.9737, test_acc: 92.75%
Epoch 135 - loss: 0.9685, acc: 93.26% / test_loss: 0.9777, test_acc: 92.34%
Epoch 136 - loss: 0.9701, acc: 93.14% / test_loss: 0.9734, test_acc: 92.79%
Epoch 137 - loss: 0.9695, acc: 93.17% / test_loss: 0.9734, test_acc: 92.80%
Epoch 138 - loss: 0.9698, acc: 93.12% / test_loss: 0.9761, test_acc: 92.55%
Epoch 139 - loss: 0.9704, acc: 93.06% / test_loss: 0.9756, test_acc: 92.61%
Epoch 140 - loss: 0.9688, acc: 93.24% / test_loss: 0.9759, test_acc: 92.55%
Epoch 141 - loss: 0.9721, acc: 92.91% / test_loss: 0.9760, test_acc: 92.55%
Epoch 142 - loss: 0.9710, acc: 93.02% / test_loss: 0.9747, test_acc: 92.63%
Epoch 143 - loss: 0.9698, acc: 93.15% / test_loss: 0.9774, test_acc: 92.40%
Epoch 144 - loss: 0.9703, acc: 93.09% / test_loss: 0.9742, test_acc: 92.67%
Epoch 145 - loss: 0.9690, acc: 93.22% / test_loss: 0.9744, test_acc: 92.66%
Epoch 146 - loss: 0.9697, acc: 93.17% / test_loss: 0.9735, test_acc: 92.77%
Epoch 147 - loss: 0.9690, acc: 93.24% / test_loss: 0.9729, test_acc: 92.82%
Epoch 148 - loss: 0.9686, acc: 93.25% / test_loss: 0.9752, test_acc: 92.62%
Epoch 149 - loss: 0.9698, acc: 93.14% / test_loss: 0.9757, test_acc: 92.60%
Epoch 150 - loss: 0.9701, acc: 93.11% / test_loss: 0.9747, test_acc: 92.65%
Epoch 151 - loss: 0.9700, acc: 93.12% / test_loss: 0.9736, test_acc: 92.74%
Epoch 152 - loss: 0.9688, acc: 93.24% / test_loss: 0.9743, test_acc: 92.71%
Epoch 153 - loss: 0.9687, acc: 93.24% / test_loss: 0.9732, test_acc: 92.78%
Epoch 154 - loss: 0.9683, acc: 93.29% / test_loss: 0.9739, test_acc: 92.71%
Epoch 155 - loss: 0.9692, acc: 93.20% / test_loss: 0.9744, test_acc: 92.71%
Epoch 156 - loss: 0.9679, acc: 93.33% / test_loss: 0.9745, test_acc: 92.66%
Epoch 157 - loss: 0.9697, acc: 93.14% / test_loss: 0.9750, test_acc: 92.61%
Epoch 158 - loss: 0.9689, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 159 - loss: 0.9680, acc: 93.30% / test_loss: 0.9745, test_acc: 92.68%
Epoch 160 - loss: 0.9704, acc: 93.06% / test_loss: 0.9749, test_acc: 92.63%
Epoch 161 - loss: 0.9703, acc: 93.09% / test_loss: 0.9741, test_acc: 92.70%
Epoch 162 - loss: 0.9723, acc: 92.93% / test_loss: 0.9761, test_acc: 92.50%
Epoch 163 - loss: 0.9703, acc: 93.10% / test_loss: 0.9766, test_acc: 92.44%
Epoch 164 - loss: 0.9686, acc: 93.25% / test_loss: 0.9735, test_acc: 92.74%
Epoch 165 - loss: 0.9696, acc: 93.17% / test_loss: 0.9732, test_acc: 92.82%
Epoch 166 - loss: 0.9686, acc: 93.27% / test_loss: 0.9748, test_acc: 92.63%
Epoch 167 - loss: 0.9687, acc: 93.24% / test_loss: 0.9739, test_acc: 92.69%
Epoch 168 - loss: 0.9675, acc: 93.35% / test_loss: 0.9730, test_acc: 92.80%
Epoch 169 - loss: 0.9692, acc: 93.20% / test_loss: 0.9756, test_acc: 92.58%
Epoch 170 - loss: 0.9703, acc: 93.08% / test_loss: 0.9743, test_acc: 92.68%
Epoch 171 - loss: 0.9692, acc: 93.19% / test_loss: 0.9742, test_acc: 92.68%
Epoch 172 - loss: 0.9687, acc: 93.22% / test_loss: 0.9791, test_acc: 92.23%
Epoch 173 - loss: 0.9694, acc: 93.17% / test_loss: 0.9737, test_acc: 92.74%
Epoch 174 - loss: 0.9687, acc: 93.23% / test_loss: 0.9737, test_acc: 92.74%
Epoch 175 - loss: 0.9693, acc: 93.16% / test_loss: 0.9765, test_acc: 92.49%
Epoch 176 - loss: 0.9702, acc: 93.07% / test_loss: 0.9745, test_acc: 92.67%
Epoch 177 - loss: 0.9688, acc: 93.24% / test_loss: 0.9747, test_acc: 92.65%
Epoch 178 - loss: 0.9694, acc: 93.16% / test_loss: 0.9738, test_acc: 92.73%
Epoch 179 - loss: 0.9683, acc: 93.27% / test_loss: 0.9748, test_acc: 92.59%
Epoch 180 - loss: 0.9683, acc: 93.27% / test_loss: 0.9724, test_acc: 92.85%
Epoch 181 - loss: 0.9688, acc: 93.21% / test_loss: 0.9761, test_acc: 92.50%
Epoch 182 - loss: 0.9700, acc: 93.13% / test_loss: 0.9746, test_acc: 92.68%
Epoch 183 - loss: 0.9685, acc: 93.25% / test_loss: 0.9745, test_acc: 92.69%
Epoch 184 - loss: 0.9689, acc: 93.21% / test_loss: 0.9728, test_acc: 92.84%
Epoch 185 - loss: 0.9674, acc: 93.36% / test_loss: 0.9729, test_acc: 92.79%
Epoch 186 - loss: 0.9696, acc: 93.14% / test_loss: 0.9841, test_acc: 91.72%
Epoch 187 - loss: 0.9709, acc: 92.99% / test_loss: 0.9732, test_acc: 92.78%
Epoch 188 - loss: 0.9692, acc: 93.20% / test_loss: 0.9726, test_acc: 92.83%
Epoch 189 - loss: 0.9690, acc: 93.20% / test_loss: 0.9732, test_acc: 92.81%
Epoch 190 - loss: 0.9676, acc: 93.33% / test_loss: 0.9736, test_acc: 92.76%
Epoch 191 - loss: 0.9688, acc: 93.21% / test_loss: 0.9730, test_acc: 92.81%
Epoch 192 - loss: 0.9683, acc: 93.29% / test_loss: 0.9735, test_acc: 92.75%
Epoch 193 - loss: 0.9695, acc: 93.14% / test_loss: 0.9744, test_acc: 92.68%
Epoch 194 - loss: 0.9688, acc: 93.23% / test_loss: 0.9761, test_acc: 92.50%
Epoch 195 - loss: 0.9682, acc: 93.28% / test_loss: 0.9727, test_acc: 92.82%
Epoch 196 - loss: 0.9672, acc: 93.38% / test_loss: 0.9720, test_acc: 92.92%
Epoch 197 - loss: 0.9678, acc: 93.32% / test_loss: 0.9742, test_acc: 92.74%
Epoch 198 - loss: 0.9692, acc: 93.19% / test_loss: 0.9762, test_acc: 92.49%
Epoch 199 - loss: 0.9692, acc: 93.19% / test_loss: 0.9738, test_acc: 92.72%
Epoch 200 - loss: 0.9675, acc: 93.35% / test_loss: 0.9735, test_acc: 92.71%
Epoch 201 - loss: 0.9689, acc: 93.20% / test_loss: 0.9739, test_acc: 92.71%
Epoch 202 - loss: 0.9695, acc: 93.17% / test_loss: 0.9738, test_acc: 92.71%
Epoch 203 - loss: 0.9694, acc: 93.16% / test_loss: 0.9761, test_acc: 92.55%
Epoch 204 - loss: 0.9680, acc: 93.32% / test_loss: 0.9713, test_acc: 92.97%
Epoch 205 - loss: 0.9671, acc: 93.40% / test_loss: 0.9708, test_acc: 93.04%
Epoch 206 - loss: 0.9659, acc: 93.51% / test_loss: 0.9717, test_acc: 92.94%
Epoch 207 - loss: 0.9667, acc: 93.42% / test_loss: 0.9716, test_acc: 92.94%
Epoch 208 - loss: 0.9678, acc: 93.33% / test_loss: 0.9700, test_acc: 93.08%
Epoch 209 - loss: 0.9665, acc: 93.45% / test_loss: 0.9718, test_acc: 92.94%
Epoch 210 - loss: 0.9659, acc: 93.51% / test_loss: 0.9767, test_acc: 92.47%
Epoch 211 - loss: 0.9662, acc: 93.51% / test_loss: 0.9713, test_acc: 92.99%
Epoch 212 - loss: 0.9662, acc: 93.51% / test_loss: 0.9722, test_acc: 92.89%
Epoch 213 - loss: 0.9666, acc: 93.44% / test_loss: 0.9700, test_acc: 93.12%
Epoch 214 - loss: 0.9660, acc: 93.51% / test_loss: 0.9777, test_acc: 92.32%
Epoch 215 - loss: 0.9689, acc: 93.24% / test_loss: 0.9728, test_acc: 92.89%
Epoch 216 - loss: 0.9676, acc: 93.37% / test_loss: 0.9698, test_acc: 93.14%
Epoch 217 - loss: 0.9651, acc: 93.60% / test_loss: 0.9699, test_acc: 93.14%
Epoch 218 - loss: 0.9653, acc: 93.57% / test_loss: 0.9695, test_acc: 93.17%
Epoch 219 - loss: 0.9678, acc: 93.33% / test_loss: 0.9702, test_acc: 93.11%
Epoch 220 - loss: 0.9666, acc: 93.44% / test_loss: 0.9721, test_acc: 92.88%
Epoch 221 - loss: 0.9658, acc: 93.51% / test_loss: 0.9705, test_acc: 93.09%
Epoch 222 - loss: 0.9666, acc: 93.45% / test_loss: 0.9711, test_acc: 92.98%
Epoch 223 - loss: 0.9659, acc: 93.51% / test_loss: 0.9715, test_acc: 92.94%
Epoch 224 - loss: 0.9657, acc: 93.53% / test_loss: 0.9702, test_acc: 93.08%
Epoch 225 - loss: 0.9652, acc: 93.58% / test_loss: 0.9694, test_acc: 93.19%
Epoch 226 - loss: 0.9660, acc: 93.50% / test_loss: 0.9717, test_acc: 92.91%
Epoch 227 - loss: 0.9667, acc: 93.44% / test_loss: 0.9730, test_acc: 92.83%
Epoch 228 - loss: 0.9665, acc: 93.46% / test_loss: 0.9699, test_acc: 93.10%
Epoch 229 - loss: 0.9656, acc: 93.56% / test_loss: 0.9714, test_acc: 92.96%
Epoch 230 - loss: 0.9651, acc: 93.60% / test_loss: 0.9692, test_acc: 93.19%
Epoch 231 - loss: 0.9638, acc: 93.73% / test_loss: 0.9693, test_acc: 93.15%
Epoch 232 - loss: 0.9658, acc: 93.53% / test_loss: 0.9701, test_acc: 93.09%
Epoch 233 - loss: 0.9658, acc: 93.52% / test_loss: 0.9706, test_acc: 93.07%
Epoch 234 - loss: 0.9648, acc: 93.61% / test_loss: 0.9686, test_acc: 93.27%
Epoch 235 - loss: 0.9640, acc: 93.69% / test_loss: 0.9690, test_acc: 93.20%
Epoch 236 - loss: 0.9652, acc: 93.57% / test_loss: 0.9714, test_acc: 93.00%
Epoch 237 - loss: 0.9643, acc: 93.68% / test_loss: 0.9693, test_acc: 93.17%
Epoch 238 - loss: 0.9657, acc: 93.54% / test_loss: 0.9697, test_acc: 93.12%
Epoch 239 - loss: 0.9642, acc: 93.67% / test_loss: 0.9693, test_acc: 93.19%
Epoch 240 - loss: 0.9639, acc: 93.70% / test_loss: 0.9707, test_acc: 93.05%
Epoch 241 - loss: 0.9677, acc: 93.35% / test_loss: 0.9704, test_acc: 93.08%
Epoch 242 - loss: 0.9648, acc: 93.61% / test_loss: 0.9719, test_acc: 92.94%
Epoch 243 - loss: 0.9655, acc: 93.56% / test_loss: 0.9695, test_acc: 93.19%
Epoch 244 - loss: 0.9647, acc: 93.66% / test_loss: 0.9690, test_acc: 93.22%
Epoch 245 - loss: 0.9651, acc: 93.60% / test_loss: 0.9702, test_acc: 93.08%
Epoch 246 - loss: 0.9649, acc: 93.61% / test_loss: 0.9708, test_acc: 93.01%
Epoch 247 - loss: 0.9643, acc: 93.68% / test_loss: 0.9684, test_acc: 93.30%
Epoch 248 - loss: 0.9639, acc: 93.71% / test_loss: 0.9695, test_acc: 93.18%
Epoch 249 - loss: 0.9640, acc: 93.70% / test_loss: 0.9704, test_acc: 93.06%
Epoch 250 - loss: 0.9651, acc: 93.57% / test_loss: 0.9694, test_acc: 93.20%
Epoch 251 - loss: 0.9667, acc: 93.42% / test_loss: 0.9732, test_acc: 92.75%
Epoch 252 - loss: 0.9645, acc: 93.66% / test_loss: 0.9696, test_acc: 93.15%
Epoch 253 - loss: 0.9640, acc: 93.71% / test_loss: 0.9708, test_acc: 93.01%
Epoch 254 - loss: 0.9651, acc: 93.59% / test_loss: 0.9679, test_acc: 93.29%
Epoch 255 - loss: 0.9643, acc: 93.67% / test_loss: 0.9690, test_acc: 93.19%
Epoch 256 - loss: 0.9673, acc: 93.37% / test_loss: 0.9693, test_acc: 93.18%
Epoch 257 - loss: 0.9644, acc: 93.67% / test_loss: 0.9705, test_acc: 93.03%
Epoch 258 - loss: 0.9654, acc: 93.54% / test_loss: 0.9694, test_acc: 93.17%
Epoch 259 - loss: 0.9642, acc: 93.67% / test_loss: 0.9681, test_acc: 93.29%
Epoch 260 - loss: 0.9638, acc: 93.72% / test_loss: 0.9692, test_acc: 93.19%
Epoch 261 - loss: 0.9648, acc: 93.64% / test_loss: 0.9692, test_acc: 93.17%
Epoch 262 - loss: 0.9640, acc: 93.71% / test_loss: 0.9724, test_acc: 92.82%
Epoch 263 - loss: 0.9650, acc: 93.59% / test_loss: 0.9705, test_acc: 93.08%
Epoch 264 - loss: 0.9638, acc: 93.72% / test_loss: 0.9687, test_acc: 93.24%
Epoch 265 - loss: 0.9631, acc: 93.79% / test_loss: 0.9680, test_acc: 93.31%
Epoch 266 - loss: 0.9631, acc: 93.79% / test_loss: 0.9685, test_acc: 93.22%
Epoch 267 - loss: 0.9640, acc: 93.71% / test_loss: 0.9688, test_acc: 93.22%
Epoch 268 - loss: 0.9641, acc: 93.69% / test_loss: 0.9694, test_acc: 93.15%
Epoch 269 - loss: 0.9639, acc: 93.70% / test_loss: 0.9684, test_acc: 93.27%
Epoch 270 - loss: 0.9646, acc: 93.64% / test_loss: 0.9700, test_acc: 93.12%
Epoch 271 - loss: 0.9641, acc: 93.70% / test_loss: 0.9689, test_acc: 93.26%
Epoch 272 - loss: 0.9639, acc: 93.71% / test_loss: 0.9682, test_acc: 93.31%
Epoch 273 - loss: 0.9646, acc: 93.65% / test_loss: 0.9706, test_acc: 93.11%
Epoch 274 - loss: 0.9654, acc: 93.56% / test_loss: 0.9705, test_acc: 93.04%
Epoch 275 - loss: 0.9644, acc: 93.66% / test_loss: 0.9700, test_acc: 93.10%
Epoch 276 - loss: 0.9641, acc: 93.70% / test_loss: 0.9716, test_acc: 92.97%
Epoch 277 - loss: 0.9653, acc: 93.56% / test_loss: 0.9687, test_acc: 93.24%
Epoch 278 - loss: 0.9632, acc: 93.78% / test_loss: 0.9681, test_acc: 93.31%
Epoch 279 - loss: 0.9634, acc: 93.76% / test_loss: 0.9687, test_acc: 93.25%
Epoch 280 - loss: 0.9638, acc: 93.74% / test_loss: 0.9691, test_acc: 93.20%
Epoch 281 - loss: 0.9650, acc: 93.60% / test_loss: 0.9685, test_acc: 93.26%
Epoch 282 - loss: 0.9643, acc: 93.67% / test_loss: 0.9686, test_acc: 93.25%
Epoch 283 - loss: 0.9630, acc: 93.80% / test_loss: 0.9685, test_acc: 93.25%
Epoch 284 - loss: 0.9638, acc: 93.73% / test_loss: 0.9694, test_acc: 93.18%
Epoch 285 - loss: 0.9642, acc: 93.69% / test_loss: 0.9681, test_acc: 93.31%
Epoch 286 - loss: 0.9632, acc: 93.77% / test_loss: 0.9698, test_acc: 93.15%
Epoch 287 - loss: 0.9633, acc: 93.76% / test_loss: 0.9707, test_acc: 93.05%
Epoch 288 - loss: 0.9631, acc: 93.79% / test_loss: 0.9703, test_acc: 93.08%
Epoch 289 - loss: 0.9637, acc: 93.74% / test_loss: 0.9697, test_acc: 93.15%
Epoch 290 - loss: 0.9636, acc: 93.75% / test_loss: 0.9690, test_acc: 93.20%
Epoch 291 - loss: 0.9644, acc: 93.64% / test_loss: 0.9692, test_acc: 93.21%
Epoch 292 - loss: 0.9629, acc: 93.82% / test_loss: 0.9701, test_acc: 93.08%
Epoch 293 - loss: 0.9629, acc: 93.79% / test_loss: 0.9696, test_acc: 93.14%
Epoch 294 - loss: 0.9630, acc: 93.79% / test_loss: 0.9674, test_acc: 93.38%
Epoch 295 - loss: 0.9641, acc: 93.68% / test_loss: 0.9684, test_acc: 93.24%
Epoch 296 - loss: 0.9619, acc: 93.89% / test_loss: 0.9676, test_acc: 93.33%
Epoch 297 - loss: 0.9624, acc: 93.88% / test_loss: 0.9688, test_acc: 93.23%
Epoch 298 - loss: 0.9623, acc: 93.85% / test_loss: 0.9678, test_acc: 93.27%
Epoch 299 - loss: 0.9625, acc: 93.85% / test_loss: 0.9676, test_acc: 93.33%
Epoch 300 - loss: 0.9628, acc: 93.83% / test_loss: 0.9682, test_acc: 93.27%
Epoch 301 - loss: 0.9636, acc: 93.74% / test_loss: 0.9724, test_acc: 92.90%
Epoch 302 - loss: 0.9644, acc: 93.66% / test_loss: 0.9682, test_acc: 93.27%
Epoch 303 - loss: 0.9631, acc: 93.78% / test_loss: 0.9686, test_acc: 93.27%
Epoch 304 - loss: 0.9639, acc: 93.74% / test_loss: 0.9718, test_acc: 92.92%
Epoch 305 - loss: 0.9627, acc: 93.83% / test_loss: 0.9680, test_acc: 93.30%
Epoch 306 - loss: 0.9627, acc: 93.82% / test_loss: 0.9684, test_acc: 93.24%
Epoch 307 - loss: 0.9622, acc: 93.87% / test_loss: 0.9676, test_acc: 93.34%
Epoch 308 - loss: 0.9629, acc: 93.81% / test_loss: 0.9675, test_acc: 93.36%
Epoch 309 - loss: 0.9621, acc: 93.88% / test_loss: 0.9674, test_acc: 93.34%
Epoch 310 - loss: 0.9623, acc: 93.86% / test_loss: 0.9676, test_acc: 93.33%
Epoch 311 - loss: 0.9618, acc: 93.91% / test_loss: 0.9681, test_acc: 93.32%
Epoch 312 - loss: 0.9629, acc: 93.82% / test_loss: 0.9720, test_acc: 92.90%
Epoch 313 - loss: 0.9637, acc: 93.73% / test_loss: 0.9701, test_acc: 93.09%
Epoch 314 - loss: 0.9628, acc: 93.79% / test_loss: 0.9682, test_acc: 93.27%
Epoch 315 - loss: 0.9635, acc: 93.75% / test_loss: 0.9688, test_acc: 93.23%
Epoch 316 - loss: 0.9464, acc: 95.61% / test_loss: 0.9273, test_acc: 97.75%
Epoch 317 - loss: 0.9206, acc: 98.38% / test_loss: 0.9283, test_acc: 97.67%
Epoch 318 - loss: 0.9195, acc: 98.51% / test_loss: 0.9282, test_acc: 97.70%
Epoch 319 - loss: 0.9188, acc: 98.59% / test_loss: 0.9239, test_acc: 98.09%
Epoch 320 - loss: 0.9181, acc: 98.66% / test_loss: 0.9244, test_acc: 98.01%
Epoch 321 - loss: 0.9196, acc: 98.51% / test_loss: 0.9269, test_acc: 97.77%
Epoch 322 - loss: 0.9190, acc: 98.56% / test_loss: 0.9248, test_acc: 97.99%
Epoch 323 - loss: 0.9181, acc: 98.66% / test_loss: 0.9244, test_acc: 98.02%
Epoch 324 - loss: 0.9179, acc: 98.70% / test_loss: 0.9264, test_acc: 97.86%
Epoch 325 - loss: 0.9181, acc: 98.69% / test_loss: 0.9242, test_acc: 98.05%
Epoch 326 - loss: 0.9156, acc: 98.95% / test_loss: 0.9225, test_acc: 98.24%
Epoch 327 - loss: 0.9153, acc: 98.98% / test_loss: 0.9255, test_acc: 97.95%
Epoch 328 - loss: 0.9148, acc: 99.02% / test_loss: 0.9233, test_acc: 98.18%
Epoch 329 - loss: 0.9153, acc: 98.94% / test_loss: 0.9234, test_acc: 98.14%
Epoch 330 - loss: 0.9168, acc: 98.78% / test_loss: 0.9227, test_acc: 98.20%
Epoch 331 - loss: 0.9153, acc: 98.96% / test_loss: 0.9267, test_acc: 97.79%
Epoch 332 - loss: 0.9151, acc: 98.99% / test_loss: 0.9219, test_acc: 98.29%
Epoch 333 - loss: 0.9144, acc: 99.06% / test_loss: 0.9222, test_acc: 98.22%
Epoch 334 - loss: 0.9135, acc: 99.14% / test_loss: 0.9215, test_acc: 98.35%
Epoch 335 - loss: 0.9140, acc: 99.09% / test_loss: 0.9228, test_acc: 98.23%
Epoch 336 - loss: 0.9145, acc: 99.05% / test_loss: 0.9223, test_acc: 98.28%
Epoch 337 - loss: 0.9158, acc: 98.93% / test_loss: 0.9222, test_acc: 98.25%
Epoch 338 - loss: 0.9145, acc: 99.03% / test_loss: 0.9272, test_acc: 97.78%
Epoch 339 - loss: 0.9157, acc: 98.91% / test_loss: 0.9228, test_acc: 98.17%
Epoch 340 - loss: 0.9142, acc: 99.09% / test_loss: 0.9219, test_acc: 98.31%
Epoch 341 - loss: 0.9141, acc: 99.08% / test_loss: 0.9218, test_acc: 98.30%
Epoch 342 - loss: 0.9141, acc: 99.06% / test_loss: 0.9225, test_acc: 98.25%
Epoch 343 - loss: 0.9143, acc: 99.06% / test_loss: 0.9218, test_acc: 98.30%
Epoch 344 - loss: 0.9145, acc: 99.05% / test_loss: 0.9299, test_acc: 97.46%
Epoch 345 - loss: 0.9151, acc: 98.97% / test_loss: 0.9228, test_acc: 98.23%
Epoch 346 - loss: 0.9146, acc: 99.03% / test_loss: 0.9250, test_acc: 98.01%
Epoch 347 - loss: 0.9142, acc: 99.09% / test_loss: 0.9232, test_acc: 98.20%
Epoch 348 - loss: 0.9144, acc: 99.05% / test_loss: 0.9225, test_acc: 98.23%
Epoch 349 - loss: 0.9169, acc: 98.79% / test_loss: 0.9226, test_acc: 98.20%
Epoch 350 - loss: 0.9149, acc: 98.99% / test_loss: 0.9230, test_acc: 98.17%
Epoch 351 - loss: 0.9151, acc: 98.98% / test_loss: 0.9244, test_acc: 98.03%
Epoch 352 - loss: 0.9141, acc: 99.09% / test_loss: 0.9222, test_acc: 98.26%
Epoch 353 - loss: 0.9148, acc: 99.00% / test_loss: 0.9240, test_acc: 98.09%
Epoch 354 - loss: 0.9139, acc: 99.09% / test_loss: 0.9221, test_acc: 98.24%
Epoch 355 - loss: 0.9148, acc: 99.02% / test_loss: 0.9229, test_acc: 98.19%
Epoch 356 - loss: 0.9160, acc: 98.87% / test_loss: 0.9265, test_acc: 97.80%
Epoch 357 - loss: 0.9129, acc: 99.20% / test_loss: 0.9228, test_acc: 98.20%
Epoch 358 - loss: 0.9157, acc: 98.91% / test_loss: 0.9230, test_acc: 98.19%
Epoch 359 - loss: 0.9142, acc: 99.05% / test_loss: 0.9223, test_acc: 98.25%
Epoch 360 - loss: 0.9140, acc: 99.10% / test_loss: 0.9224, test_acc: 98.24%
Epoch 361 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.09%
Epoch 362 - loss: 0.9144, acc: 99.06% / test_loss: 0.9228, test_acc: 98.23%
Epoch 363 - loss: 0.9147, acc: 99.01% / test_loss: 0.9236, test_acc: 98.11%
Epoch 364 - loss: 0.9143, acc: 99.06% / test_loss: 0.9232, test_acc: 98.15%
Epoch 365 - loss: 0.9146, acc: 99.03% / test_loss: 0.9220, test_acc: 98.29%
Epoch 366 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.20%
Epoch 367 - loss: 0.9135, acc: 99.14% / test_loss: 0.9224, test_acc: 98.23%
Epoch 368 - loss: 0.9147, acc: 99.02% / test_loss: 0.9227, test_acc: 98.20%
Epoch 369 - loss: 0.9153, acc: 98.94% / test_loss: 0.9227, test_acc: 98.20%
Epoch 370 - loss: 0.9153, acc: 98.96% / test_loss: 0.9225, test_acc: 98.23%
Epoch 371 - loss: 0.9142, acc: 99.06% / test_loss: 0.9228, test_acc: 98.17%
Epoch 372 - loss: 0.9133, acc: 99.15% / test_loss: 0.9224, test_acc: 98.24%
Epoch 373 - loss: 0.9131, acc: 99.17% / test_loss: 0.9224, test_acc: 98.26%
Epoch 374 - loss: 0.9130, acc: 99.19% / test_loss: 0.9212, test_acc: 98.35%
Epoch 375 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 376 - loss: 0.9154, acc: 98.94% / test_loss: 0.9251, test_acc: 97.98%
Epoch 377 - loss: 0.9143, acc: 99.06% / test_loss: 0.9240, test_acc: 98.07%
Epoch 378 - loss: 0.9143, acc: 99.05% / test_loss: 0.9245, test_acc: 98.01%
Epoch 379 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.31%
Epoch 380 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.26%
Epoch 381 - loss: 0.9134, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 382 - loss: 0.9146, acc: 99.05% / test_loss: 0.9247, test_acc: 98.03%
Epoch 383 - loss: 0.9145, acc: 99.03% / test_loss: 0.9227, test_acc: 98.20%
Epoch 384 - loss: 0.9147, acc: 99.01% / test_loss: 0.9257, test_acc: 97.89%
Epoch 385 - loss: 0.9143, acc: 99.06% / test_loss: 0.9227, test_acc: 98.21%
Epoch 386 - loss: 0.9130, acc: 99.18% / test_loss: 0.9210, test_acc: 98.37%
Epoch 387 - loss: 0.9129, acc: 99.19% / test_loss: 0.9211, test_acc: 98.36%
Epoch 388 - loss: 0.9146, acc: 99.01% / test_loss: 0.9240, test_acc: 98.07%
Epoch 389 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.26%
Epoch 390 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.33%
Epoch 391 - loss: 0.9133, acc: 99.16% / test_loss: 0.9212, test_acc: 98.35%
Epoch 392 - loss: 0.9136, acc: 99.12% / test_loss: 0.9214, test_acc: 98.35%
Epoch 393 - loss: 0.9136, acc: 99.12% / test_loss: 0.9255, test_acc: 97.92%
Epoch 394 - loss: 0.9164, acc: 98.84% / test_loss: 0.9231, test_acc: 98.17%
Epoch 395 - loss: 0.9142, acc: 99.06% / test_loss: 0.9230, test_acc: 98.15%
Epoch 396 - loss: 0.9149, acc: 99.00% / test_loss: 0.9223, test_acc: 98.25%
Epoch 397 - loss: 0.9139, acc: 99.08% / test_loss: 0.9216, test_acc: 98.32%
Epoch 398 - loss: 0.9142, acc: 99.06% / test_loss: 0.9225, test_acc: 98.24%
Epoch 399 - loss: 0.9135, acc: 99.15% / test_loss: 0.9227, test_acc: 98.20%
Epoch 400 - loss: 0.9143, acc: 99.05% / test_loss: 0.9241, test_acc: 98.07%
Best test accuracy 98.37% in epoch 386.
----------------------------------------------------------------------------------------------------
Run 6
Epoch 1 - loss: 1.3549, acc: 55.08% / test_loss: 1.2234, test_acc: 68.43%
Epoch 2 - loss: 1.1673, acc: 74.80% / test_loss: 1.0819, test_acc: 84.23%
Epoch 3 - loss: 1.0700, acc: 84.26% / test_loss: 1.0454, test_acc: 86.61%
Epoch 4 - loss: 1.0498, acc: 85.96% / test_loss: 1.0345, test_acc: 87.38%
Epoch 5 - loss: 1.0409, acc: 86.70% / test_loss: 1.0311, test_acc: 87.45%
Epoch 6 - loss: 1.0371, acc: 86.92% / test_loss: 1.0354, test_acc: 87.22%
Epoch 7 - loss: 1.0322, acc: 87.39% / test_loss: 1.0206, test_acc: 88.57%
Epoch 8 - loss: 1.0302, acc: 87.53% / test_loss: 1.0215, test_acc: 88.35%
Epoch 9 - loss: 1.0267, acc: 87.87% / test_loss: 1.0254, test_acc: 88.21%
Epoch 10 - loss: 1.0262, acc: 87.85% / test_loss: 1.0164, test_acc: 88.86%
Epoch 11 - loss: 1.0233, acc: 88.16% / test_loss: 1.0173, test_acc: 88.79%
Epoch 12 - loss: 1.0201, acc: 88.48% / test_loss: 1.0159, test_acc: 88.83%
Epoch 13 - loss: 1.0203, acc: 88.41% / test_loss: 1.0117, test_acc: 89.29%
Epoch 14 - loss: 1.0198, acc: 88.43% / test_loss: 1.0096, test_acc: 89.38%
Epoch 15 - loss: 1.0170, acc: 88.73% / test_loss: 1.0086, test_acc: 89.52%
Epoch 16 - loss: 1.0174, acc: 88.67% / test_loss: 1.0095, test_acc: 89.43%
Epoch 17 - loss: 1.0148, acc: 88.84% / test_loss: 1.0095, test_acc: 89.41%
Epoch 18 - loss: 1.0141, acc: 88.95% / test_loss: 1.0125, test_acc: 89.14%
Epoch 19 - loss: 1.0132, acc: 89.01% / test_loss: 1.0063, test_acc: 89.79%
Epoch 20 - loss: 1.0112, acc: 89.27% / test_loss: 1.0090, test_acc: 89.59%
Epoch 21 - loss: 1.0106, acc: 89.31% / test_loss: 1.0116, test_acc: 89.27%
Epoch 22 - loss: 1.0122, acc: 89.16% / test_loss: 1.0056, test_acc: 89.74%
Epoch 23 - loss: 1.0090, acc: 89.40% / test_loss: 1.0049, test_acc: 89.85%
Epoch 24 - loss: 1.0083, acc: 89.54% / test_loss: 1.0036, test_acc: 90.01%
Epoch 25 - loss: 1.0077, acc: 89.59% / test_loss: 1.0048, test_acc: 89.78%
Epoch 26 - loss: 1.0067, acc: 89.65% / test_loss: 1.0023, test_acc: 90.02%
Epoch 27 - loss: 1.0075, acc: 89.54% / test_loss: 1.0023, test_acc: 90.09%
Epoch 28 - loss: 1.0073, acc: 89.59% / test_loss: 1.0041, test_acc: 89.92%
Epoch 29 - loss: 1.0066, acc: 89.67% / test_loss: 1.0014, test_acc: 90.15%
Epoch 30 - loss: 1.0049, acc: 89.84% / test_loss: 1.0023, test_acc: 90.02%
Epoch 31 - loss: 1.0062, acc: 89.71% / test_loss: 1.0014, test_acc: 90.15%
Epoch 32 - loss: 1.0053, acc: 89.76% / test_loss: 1.0024, test_acc: 90.07%
Epoch 33 - loss: 1.0067, acc: 89.64% / test_loss: 1.0028, test_acc: 90.10%
Epoch 34 - loss: 1.0048, acc: 89.84% / test_loss: 1.0002, test_acc: 90.24%
Epoch 35 - loss: 1.0060, acc: 89.72% / test_loss: 1.0020, test_acc: 90.16%
Epoch 36 - loss: 1.0044, acc: 89.83% / test_loss: 0.9994, test_acc: 90.36%
Epoch 37 - loss: 1.0047, acc: 89.82% / test_loss: 1.0075, test_acc: 89.60%
Epoch 38 - loss: 1.0054, acc: 89.75% / test_loss: 1.0020, test_acc: 90.12%
Epoch 39 - loss: 1.0033, acc: 89.97% / test_loss: 1.0025, test_acc: 90.04%
Epoch 40 - loss: 1.0026, acc: 90.03% / test_loss: 1.0005, test_acc: 90.20%
Epoch 41 - loss: 1.0044, acc: 89.86% / test_loss: 1.0002, test_acc: 90.22%
Epoch 42 - loss: 1.0029, acc: 89.98% / test_loss: 1.0015, test_acc: 90.12%
Epoch 43 - loss: 1.0016, acc: 90.12% / test_loss: 0.9982, test_acc: 90.40%
Epoch 44 - loss: 1.0023, acc: 90.00% / test_loss: 0.9990, test_acc: 90.36%
Epoch 45 - loss: 1.0041, acc: 89.82% / test_loss: 1.0007, test_acc: 90.32%
Epoch 46 - loss: 1.0022, acc: 90.04% / test_loss: 0.9992, test_acc: 90.37%
Epoch 47 - loss: 1.0022, acc: 90.03% / test_loss: 0.9972, test_acc: 90.48%
Epoch 48 - loss: 1.0004, acc: 90.20% / test_loss: 0.9978, test_acc: 90.43%
Epoch 49 - loss: 1.0015, acc: 90.10% / test_loss: 0.9989, test_acc: 90.30%
Epoch 50 - loss: 1.0018, acc: 90.01% / test_loss: 0.9978, test_acc: 90.47%
Epoch 51 - loss: 0.9997, acc: 90.27% / test_loss: 0.9968, test_acc: 90.52%
Epoch 52 - loss: 1.0003, acc: 90.21% / test_loss: 0.9993, test_acc: 90.28%
Epoch 53 - loss: 1.0017, acc: 90.08% / test_loss: 1.0043, test_acc: 89.88%
Epoch 54 - loss: 1.0014, acc: 90.10% / test_loss: 0.9972, test_acc: 90.49%
Epoch 55 - loss: 0.9996, acc: 90.25% / test_loss: 0.9971, test_acc: 90.52%
Epoch 56 - loss: 1.0001, acc: 90.25% / test_loss: 0.9990, test_acc: 90.37%
Epoch 57 - loss: 1.0000, acc: 90.22% / test_loss: 0.9995, test_acc: 90.34%
Epoch 58 - loss: 0.9995, acc: 90.28% / test_loss: 1.0024, test_acc: 90.06%
Epoch 59 - loss: 0.9992, acc: 90.28% / test_loss: 0.9974, test_acc: 90.52%
Epoch 60 - loss: 0.9979, acc: 90.43% / test_loss: 0.9969, test_acc: 90.57%
Epoch 61 - loss: 0.9978, acc: 90.43% / test_loss: 0.9959, test_acc: 90.62%
Epoch 62 - loss: 0.9972, acc: 90.49% / test_loss: 0.9965, test_acc: 90.65%
Epoch 63 - loss: 0.9980, acc: 90.40% / test_loss: 0.9957, test_acc: 90.67%
Epoch 64 - loss: 0.9986, acc: 90.36% / test_loss: 0.9959, test_acc: 90.62%
Epoch 65 - loss: 0.9973, acc: 90.47% / test_loss: 0.9952, test_acc: 90.63%
Epoch 66 - loss: 0.9964, acc: 90.53% / test_loss: 0.9935, test_acc: 90.86%
Epoch 67 - loss: 0.9968, acc: 90.55% / test_loss: 0.9939, test_acc: 90.80%
Epoch 68 - loss: 0.9960, acc: 90.61% / test_loss: 0.9969, test_acc: 90.51%
Epoch 69 - loss: 0.9956, acc: 90.63% / test_loss: 0.9955, test_acc: 90.65%
Epoch 70 - loss: 0.9957, acc: 90.65% / test_loss: 0.9933, test_acc: 90.86%
Epoch 71 - loss: 0.9950, acc: 90.65% / test_loss: 0.9965, test_acc: 90.66%
Epoch 72 - loss: 0.9970, acc: 90.52% / test_loss: 0.9940, test_acc: 90.79%
Epoch 73 - loss: 0.9947, acc: 90.74% / test_loss: 0.9956, test_acc: 90.69%
Epoch 74 - loss: 0.9958, acc: 90.65% / test_loss: 0.9931, test_acc: 90.89%
Epoch 75 - loss: 0.9931, acc: 90.86% / test_loss: 0.9935, test_acc: 90.81%
Epoch 76 - loss: 0.9948, acc: 90.71% / test_loss: 0.9940, test_acc: 90.83%
Epoch 77 - loss: 0.9948, acc: 90.71% / test_loss: 0.9926, test_acc: 90.88%
Epoch 78 - loss: 0.9921, acc: 90.96% / test_loss: 0.9941, test_acc: 90.78%
Epoch 79 - loss: 0.9912, acc: 91.12% / test_loss: 0.9898, test_acc: 91.21%
Epoch 80 - loss: 0.9878, acc: 91.39% / test_loss: 0.9868, test_acc: 91.51%
Epoch 81 - loss: 0.9863, acc: 91.51% / test_loss: 0.9831, test_acc: 91.94%
Epoch 82 - loss: 0.9831, acc: 91.87% / test_loss: 0.9833, test_acc: 91.85%
Epoch 83 - loss: 0.9831, acc: 91.88% / test_loss: 0.9819, test_acc: 91.98%
Epoch 84 - loss: 0.9806, acc: 92.11% / test_loss: 0.9860, test_acc: 91.57%
Epoch 85 - loss: 0.9801, acc: 92.13% / test_loss: 0.9794, test_acc: 92.20%
Epoch 86 - loss: 0.9779, acc: 92.34% / test_loss: 0.9796, test_acc: 92.19%
Epoch 87 - loss: 0.9780, acc: 92.34% / test_loss: 0.9842, test_acc: 91.79%
Epoch 88 - loss: 0.9771, acc: 92.45% / test_loss: 0.9804, test_acc: 92.10%
Epoch 89 - loss: 0.9775, acc: 92.43% / test_loss: 0.9902, test_acc: 91.19%
Epoch 90 - loss: 0.9765, acc: 92.53% / test_loss: 0.9805, test_acc: 92.07%
Epoch 91 - loss: 0.9749, acc: 92.68% / test_loss: 0.9790, test_acc: 92.19%
Epoch 92 - loss: 0.9749, acc: 92.66% / test_loss: 0.9776, test_acc: 92.39%
Epoch 93 - loss: 0.9748, acc: 92.71% / test_loss: 0.9825, test_acc: 91.91%
Epoch 94 - loss: 0.9751, acc: 92.65% / test_loss: 0.9777, test_acc: 92.44%
Epoch 95 - loss: 0.9729, acc: 92.86% / test_loss: 0.9769, test_acc: 92.43%
Epoch 96 - loss: 0.9735, acc: 92.78% / test_loss: 0.9785, test_acc: 92.29%
Epoch 97 - loss: 0.9760, acc: 92.57% / test_loss: 0.9776, test_acc: 92.42%
Epoch 98 - loss: 0.9734, acc: 92.82% / test_loss: 0.9776, test_acc: 92.40%
Epoch 99 - loss: 0.9745, acc: 92.73% / test_loss: 0.9778, test_acc: 92.33%
Epoch 100 - loss: 0.9735, acc: 92.83% / test_loss: 0.9761, test_acc: 92.55%
Epoch 101 - loss: 0.9748, acc: 92.68% / test_loss: 0.9805, test_acc: 92.13%
Epoch 102 - loss: 0.9740, acc: 92.74% / test_loss: 0.9773, test_acc: 92.42%
Epoch 103 - loss: 0.9739, acc: 92.78% / test_loss: 0.9780, test_acc: 92.33%
Epoch 104 - loss: 0.9732, acc: 92.87% / test_loss: 0.9770, test_acc: 92.42%
Epoch 105 - loss: 0.9722, acc: 92.91% / test_loss: 0.9752, test_acc: 92.62%
Epoch 106 - loss: 0.9718, acc: 92.93% / test_loss: 0.9754, test_acc: 92.59%
Epoch 107 - loss: 0.9732, acc: 92.89% / test_loss: 0.9778, test_acc: 92.46%
Epoch 108 - loss: 0.9721, acc: 92.91% / test_loss: 0.9759, test_acc: 92.49%
Epoch 109 - loss: 0.9717, acc: 92.99% / test_loss: 0.9770, test_acc: 92.44%
Epoch 110 - loss: 0.9735, acc: 92.82% / test_loss: 0.9766, test_acc: 92.46%
Epoch 111 - loss: 0.9715, acc: 92.99% / test_loss: 0.9750, test_acc: 92.67%
Epoch 112 - loss: 0.9732, acc: 92.86% / test_loss: 0.9755, test_acc: 92.64%
Epoch 113 - loss: 0.9725, acc: 92.89% / test_loss: 0.9744, test_acc: 92.72%
Epoch 114 - loss: 0.9704, acc: 93.10% / test_loss: 0.9733, test_acc: 92.81%
Epoch 115 - loss: 0.9708, acc: 93.08% / test_loss: 0.9748, test_acc: 92.65%
Epoch 116 - loss: 0.9690, acc: 93.25% / test_loss: 0.9734, test_acc: 92.72%
Epoch 117 - loss: 0.9695, acc: 93.19% / test_loss: 0.9730, test_acc: 92.79%
Epoch 118 - loss: 0.9708, acc: 93.06% / test_loss: 0.9722, test_acc: 92.93%
Epoch 119 - loss: 0.9711, acc: 93.02% / test_loss: 0.9815, test_acc: 92.07%
Epoch 120 - loss: 0.9706, acc: 93.08% / test_loss: 0.9738, test_acc: 92.77%
Epoch 121 - loss: 0.9690, acc: 93.27% / test_loss: 0.9726, test_acc: 92.85%
Epoch 122 - loss: 0.9688, acc: 93.24% / test_loss: 0.9736, test_acc: 92.77%
Epoch 123 - loss: 0.9685, acc: 93.29% / test_loss: 0.9726, test_acc: 92.80%
Epoch 124 - loss: 0.9703, acc: 93.11% / test_loss: 0.9731, test_acc: 92.80%
Epoch 125 - loss: 0.9686, acc: 93.30% / test_loss: 0.9727, test_acc: 92.88%
Epoch 126 - loss: 0.9692, acc: 93.21% / test_loss: 0.9718, test_acc: 92.95%
Epoch 127 - loss: 0.9677, acc: 93.34% / test_loss: 0.9776, test_acc: 92.32%
Epoch 128 - loss: 0.9682, acc: 93.31% / test_loss: 0.9723, test_acc: 92.88%
Epoch 129 - loss: 0.9697, acc: 93.19% / test_loss: 0.9743, test_acc: 92.69%
Epoch 130 - loss: 0.9687, acc: 93.27% / test_loss: 0.9714, test_acc: 92.97%
Epoch 131 - loss: 0.9677, acc: 93.34% / test_loss: 0.9717, test_acc: 92.97%
Epoch 132 - loss: 0.9680, acc: 93.36% / test_loss: 0.9728, test_acc: 92.87%
Epoch 133 - loss: 0.9679, acc: 93.34% / test_loss: 0.9725, test_acc: 92.84%
Epoch 134 - loss: 0.9685, acc: 93.32% / test_loss: 0.9742, test_acc: 92.70%
Epoch 135 - loss: 0.9672, acc: 93.44% / test_loss: 0.9727, test_acc: 92.87%
Epoch 136 - loss: 0.9677, acc: 93.39% / test_loss: 0.9726, test_acc: 92.91%
Epoch 137 - loss: 0.9672, acc: 93.42% / test_loss: 0.9720, test_acc: 92.90%
Epoch 138 - loss: 0.9667, acc: 93.45% / test_loss: 0.9746, test_acc: 92.68%
Epoch 139 - loss: 0.9670, acc: 93.41% / test_loss: 0.9720, test_acc: 92.92%
Epoch 140 - loss: 0.9680, acc: 93.32% / test_loss: 0.9723, test_acc: 92.92%
Epoch 141 - loss: 0.9669, acc: 93.42% / test_loss: 0.9723, test_acc: 92.87%
Epoch 142 - loss: 0.9671, acc: 93.40% / test_loss: 0.9702, test_acc: 93.09%
Epoch 143 - loss: 0.9674, acc: 93.39% / test_loss: 0.9717, test_acc: 92.98%
Epoch 144 - loss: 0.9666, acc: 93.48% / test_loss: 0.9710, test_acc: 93.01%
Epoch 145 - loss: 0.9657, acc: 93.55% / test_loss: 0.9711, test_acc: 93.01%
Epoch 146 - loss: 0.9658, acc: 93.54% / test_loss: 0.9729, test_acc: 92.80%
Epoch 147 - loss: 0.9662, acc: 93.53% / test_loss: 0.9720, test_acc: 92.91%
Epoch 148 - loss: 0.9672, acc: 93.40% / test_loss: 0.9710, test_acc: 93.02%
Epoch 149 - loss: 0.9672, acc: 93.43% / test_loss: 0.9732, test_acc: 92.79%
Epoch 150 - loss: 0.9661, acc: 93.51% / test_loss: 0.9702, test_acc: 93.08%
Epoch 151 - loss: 0.9663, acc: 93.49% / test_loss: 0.9708, test_acc: 93.05%
Epoch 152 - loss: 0.9662, acc: 93.49% / test_loss: 0.9703, test_acc: 93.06%
Epoch 153 - loss: 0.9665, acc: 93.48% / test_loss: 0.9705, test_acc: 93.07%
Epoch 154 - loss: 0.9657, acc: 93.54% / test_loss: 0.9710, test_acc: 92.99%
Epoch 155 - loss: 0.9657, acc: 93.56% / test_loss: 0.9734, test_acc: 92.74%
Epoch 156 - loss: 0.9669, acc: 93.46% / test_loss: 0.9702, test_acc: 93.11%
Epoch 157 - loss: 0.9661, acc: 93.51% / test_loss: 0.9709, test_acc: 93.01%
Epoch 158 - loss: 0.9652, acc: 93.59% / test_loss: 0.9699, test_acc: 93.10%
Epoch 159 - loss: 0.9663, acc: 93.50% / test_loss: 0.9702, test_acc: 93.11%
Epoch 160 - loss: 0.9659, acc: 93.52% / test_loss: 0.9706, test_acc: 93.05%
Epoch 161 - loss: 0.9658, acc: 93.51% / test_loss: 0.9698, test_acc: 93.14%
Epoch 162 - loss: 0.9649, acc: 93.61% / test_loss: 0.9761, test_acc: 92.59%
Epoch 163 - loss: 0.9673, acc: 93.42% / test_loss: 0.9702, test_acc: 93.08%
Epoch 164 - loss: 0.9660, acc: 93.50% / test_loss: 0.9701, test_acc: 93.07%
Epoch 165 - loss: 0.9653, acc: 93.56% / test_loss: 0.9707, test_acc: 93.05%
Epoch 166 - loss: 0.9662, acc: 93.48% / test_loss: 0.9705, test_acc: 93.08%
Epoch 167 - loss: 0.9659, acc: 93.52% / test_loss: 0.9698, test_acc: 93.15%
Epoch 168 - loss: 0.9645, acc: 93.66% / test_loss: 0.9701, test_acc: 93.08%
Epoch 169 - loss: 0.9647, acc: 93.64% / test_loss: 0.9709, test_acc: 93.01%
Epoch 170 - loss: 0.9644, acc: 93.67% / test_loss: 0.9702, test_acc: 93.05%
Epoch 171 - loss: 0.9651, acc: 93.58% / test_loss: 0.9697, test_acc: 93.14%
Epoch 172 - loss: 0.9657, acc: 93.53% / test_loss: 0.9696, test_acc: 93.12%
Epoch 173 - loss: 0.9655, acc: 93.56% / test_loss: 0.9731, test_acc: 92.81%
Epoch 174 - loss: 0.9666, acc: 93.47% / test_loss: 0.9704, test_acc: 93.03%
Epoch 175 - loss: 0.9653, acc: 93.60% / test_loss: 0.9715, test_acc: 92.98%
Epoch 176 - loss: 0.9658, acc: 93.51% / test_loss: 0.9693, test_acc: 93.16%
Epoch 177 - loss: 0.9650, acc: 93.61% / test_loss: 0.9721, test_acc: 92.90%
Epoch 178 - loss: 0.9645, acc: 93.67% / test_loss: 0.9704, test_acc: 93.08%
Epoch 179 - loss: 0.9645, acc: 93.67% / test_loss: 0.9698, test_acc: 93.14%
Epoch 180 - loss: 0.9652, acc: 93.59% / test_loss: 0.9712, test_acc: 93.02%
Epoch 181 - loss: 0.9659, acc: 93.55% / test_loss: 0.9692, test_acc: 93.20%
Epoch 182 - loss: 0.9650, acc: 93.60% / test_loss: 0.9684, test_acc: 93.26%
Epoch 183 - loss: 0.9650, acc: 93.61% / test_loss: 0.9715, test_acc: 92.97%
Epoch 184 - loss: 0.9642, acc: 93.65% / test_loss: 0.9690, test_acc: 93.21%
Epoch 185 - loss: 0.9639, acc: 93.70% / test_loss: 0.9688, test_acc: 93.24%
Epoch 186 - loss: 0.9643, acc: 93.68% / test_loss: 0.9699, test_acc: 93.14%
Epoch 187 - loss: 0.9663, acc: 93.46% / test_loss: 0.9697, test_acc: 93.13%
Epoch 188 - loss: 0.9649, acc: 93.64% / test_loss: 0.9698, test_acc: 93.11%
Epoch 189 - loss: 0.9647, acc: 93.63% / test_loss: 0.9691, test_acc: 93.21%
Epoch 190 - loss: 0.9640, acc: 93.73% / test_loss: 0.9681, test_acc: 93.31%
Epoch 191 - loss: 0.9643, acc: 93.65% / test_loss: 0.9686, test_acc: 93.24%
Epoch 192 - loss: 0.9651, acc: 93.57% / test_loss: 0.9706, test_acc: 93.06%
Epoch 193 - loss: 0.9660, acc: 93.50% / test_loss: 0.9756, test_acc: 92.53%
Epoch 194 - loss: 0.9646, acc: 93.66% / test_loss: 0.9692, test_acc: 93.19%
Epoch 195 - loss: 0.9650, acc: 93.61% / test_loss: 0.9687, test_acc: 93.23%
Epoch 196 - loss: 0.9639, acc: 93.73% / test_loss: 0.9727, test_acc: 92.89%
Epoch 197 - loss: 0.9626, acc: 93.83% / test_loss: 0.9679, test_acc: 93.33%
Epoch 198 - loss: 0.9632, acc: 93.76% / test_loss: 0.9684, test_acc: 93.27%
Epoch 199 - loss: 0.9637, acc: 93.73% / test_loss: 0.9702, test_acc: 93.10%
Epoch 200 - loss: 0.9642, acc: 93.67% / test_loss: 0.9695, test_acc: 93.14%
Epoch 201 - loss: 0.9641, acc: 93.70% / test_loss: 0.9691, test_acc: 93.17%
Epoch 202 - loss: 0.9645, acc: 93.67% / test_loss: 0.9756, test_acc: 92.50%
Epoch 203 - loss: 0.9642, acc: 93.67% / test_loss: 0.9721, test_acc: 92.87%
Epoch 204 - loss: 0.9661, acc: 93.46% / test_loss: 0.9682, test_acc: 93.27%
Epoch 205 - loss: 0.9651, acc: 93.60% / test_loss: 0.9695, test_acc: 93.17%
Epoch 206 - loss: 0.9639, acc: 93.69% / test_loss: 0.9708, test_acc: 92.99%
Epoch 207 - loss: 0.9640, acc: 93.67% / test_loss: 0.9682, test_acc: 93.29%
Epoch 208 - loss: 0.9635, acc: 93.75% / test_loss: 0.9674, test_acc: 93.36%
Epoch 209 - loss: 0.9632, acc: 93.76% / test_loss: 0.9678, test_acc: 93.35%
Epoch 210 - loss: 0.9637, acc: 93.74% / test_loss: 0.9718, test_acc: 92.90%
Epoch 211 - loss: 0.9643, acc: 93.67% / test_loss: 0.9691, test_acc: 93.19%
Epoch 212 - loss: 0.9626, acc: 93.82% / test_loss: 0.9735, test_acc: 92.80%
Epoch 213 - loss: 0.9638, acc: 93.73% / test_loss: 0.9679, test_acc: 93.32%
Epoch 214 - loss: 0.9630, acc: 93.80% / test_loss: 0.9712, test_acc: 92.96%
Epoch 215 - loss: 0.9629, acc: 93.83% / test_loss: 0.9692, test_acc: 93.17%
Epoch 216 - loss: 0.9627, acc: 93.82% / test_loss: 0.9681, test_acc: 93.30%
Epoch 217 - loss: 0.9636, acc: 93.76% / test_loss: 0.9687, test_acc: 93.23%
Epoch 218 - loss: 0.9639, acc: 93.70% / test_loss: 0.9701, test_acc: 93.07%
Epoch 219 - loss: 0.9633, acc: 93.76% / test_loss: 0.9695, test_acc: 93.14%
Epoch 220 - loss: 0.9626, acc: 93.84% / test_loss: 0.9712, test_acc: 92.97%
Epoch 221 - loss: 0.9622, acc: 93.86% / test_loss: 0.9693, test_acc: 93.15%
Epoch 222 - loss: 0.9633, acc: 93.75% / test_loss: 0.9689, test_acc: 93.21%
Epoch 223 - loss: 0.9640, acc: 93.73% / test_loss: 0.9680, test_acc: 93.31%
Epoch 224 - loss: 0.9624, acc: 93.83% / test_loss: 0.9681, test_acc: 93.27%
Epoch 225 - loss: 0.9631, acc: 93.78% / test_loss: 0.9692, test_acc: 93.24%
Epoch 226 - loss: 0.9642, acc: 93.66% / test_loss: 0.9689, test_acc: 93.20%
Epoch 227 - loss: 0.9645, acc: 93.67% / test_loss: 0.9707, test_acc: 93.02%
Epoch 228 - loss: 0.9638, acc: 93.74% / test_loss: 0.9704, test_acc: 93.05%
Epoch 229 - loss: 0.9634, acc: 93.74% / test_loss: 0.9673, test_acc: 93.39%
Epoch 230 - loss: 0.9634, acc: 93.74% / test_loss: 0.9700, test_acc: 93.08%
Epoch 231 - loss: 0.9637, acc: 93.73% / test_loss: 0.9675, test_acc: 93.35%
Epoch 232 - loss: 0.9629, acc: 93.79% / test_loss: 0.9694, test_acc: 93.16%
Epoch 233 - loss: 0.9631, acc: 93.77% / test_loss: 0.9713, test_acc: 92.93%
Epoch 234 - loss: 0.9625, acc: 93.84% / test_loss: 0.9676, test_acc: 93.32%
Epoch 235 - loss: 0.9618, acc: 93.91% / test_loss: 0.9671, test_acc: 93.38%
Epoch 236 - loss: 0.9621, acc: 93.87% / test_loss: 0.9688, test_acc: 93.20%
Epoch 237 - loss: 0.9635, acc: 93.76% / test_loss: 0.9689, test_acc: 93.20%
Epoch 238 - loss: 0.9623, acc: 93.87% / test_loss: 0.9711, test_acc: 92.97%
Epoch 239 - loss: 0.9654, acc: 93.54% / test_loss: 0.9707, test_acc: 93.03%
Epoch 240 - loss: 0.9649, acc: 93.61% / test_loss: 0.9693, test_acc: 93.18%
Epoch 241 - loss: 0.9649, acc: 93.63% / test_loss: 0.9701, test_acc: 93.11%
Epoch 242 - loss: 0.9624, acc: 93.87% / test_loss: 0.9683, test_acc: 93.26%
Epoch 243 - loss: 0.9623, acc: 93.86% / test_loss: 0.9676, test_acc: 93.33%
Epoch 244 - loss: 0.9621, acc: 93.87% / test_loss: 0.9678, test_acc: 93.36%
Epoch 245 - loss: 0.9648, acc: 93.64% / test_loss: 0.9689, test_acc: 93.24%
Epoch 246 - loss: 0.9623, acc: 93.84% / test_loss: 0.9677, test_acc: 93.34%
Epoch 247 - loss: 0.9620, acc: 93.88% / test_loss: 0.9688, test_acc: 93.24%
Epoch 248 - loss: 0.9624, acc: 93.84% / test_loss: 0.9682, test_acc: 93.25%
Epoch 249 - loss: 0.9621, acc: 93.88% / test_loss: 0.9673, test_acc: 93.36%
Epoch 250 - loss: 0.9626, acc: 93.82% / test_loss: 0.9708, test_acc: 93.08%
Epoch 251 - loss: 0.9644, acc: 93.66% / test_loss: 0.9707, test_acc: 93.01%
Epoch 252 - loss: 0.9631, acc: 93.77% / test_loss: 0.9678, test_acc: 93.33%
Epoch 253 - loss: 0.9619, acc: 93.90% / test_loss: 0.9674, test_acc: 93.33%
Epoch 254 - loss: 0.9631, acc: 93.76% / test_loss: 0.9676, test_acc: 93.32%
Epoch 255 - loss: 0.9634, acc: 93.75% / test_loss: 0.9678, test_acc: 93.30%
Epoch 256 - loss: 0.9645, acc: 93.63% / test_loss: 0.9695, test_acc: 93.16%
Epoch 257 - loss: 0.9631, acc: 93.79% / test_loss: 0.9678, test_acc: 93.30%
Epoch 258 - loss: 0.9619, acc: 93.89% / test_loss: 0.9690, test_acc: 93.20%
Epoch 259 - loss: 0.9623, acc: 93.87% / test_loss: 0.9676, test_acc: 93.31%
Epoch 260 - loss: 0.9622, acc: 93.88% / test_loss: 0.9646, test_acc: 93.66%
Epoch 261 - loss: 0.9597, acc: 94.13% / test_loss: 0.9644, test_acc: 93.70%
Epoch 262 - loss: 0.9610, acc: 94.01% / test_loss: 0.9659, test_acc: 93.51%
Epoch 263 - loss: 0.9582, acc: 94.32% / test_loss: 0.9645, test_acc: 93.65%
Epoch 264 - loss: 0.9272, acc: 97.69% / test_loss: 0.9245, test_acc: 98.03%
Epoch 265 - loss: 0.9168, acc: 98.83% / test_loss: 0.9238, test_acc: 98.12%
Epoch 266 - loss: 0.9145, acc: 99.09% / test_loss: 0.9220, test_acc: 98.27%
Epoch 267 - loss: 0.9144, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 268 - loss: 0.9137, acc: 99.12% / test_loss: 0.9233, test_acc: 98.16%
Epoch 269 - loss: 0.9156, acc: 98.93% / test_loss: 0.9298, test_acc: 97.47%
Epoch 270 - loss: 0.9162, acc: 98.88% / test_loss: 0.9232, test_acc: 98.17%
Epoch 271 - loss: 0.9160, acc: 98.88% / test_loss: 0.9286, test_acc: 97.58%
Epoch 272 - loss: 0.9159, acc: 98.91% / test_loss: 0.9232, test_acc: 98.15%
Epoch 273 - loss: 0.9147, acc: 99.01% / test_loss: 0.9218, test_acc: 98.31%
Epoch 274 - loss: 0.9158, acc: 98.91% / test_loss: 0.9242, test_acc: 98.07%
Epoch 275 - loss: 0.9144, acc: 99.05% / test_loss: 0.9227, test_acc: 98.20%
Epoch 276 - loss: 0.9155, acc: 98.94% / test_loss: 0.9230, test_acc: 98.23%
Epoch 277 - loss: 0.9150, acc: 99.00% / test_loss: 0.9229, test_acc: 98.20%
Epoch 278 - loss: 0.9145, acc: 99.03% / test_loss: 0.9228, test_acc: 98.22%
Epoch 279 - loss: 0.9146, acc: 99.03% / test_loss: 0.9246, test_acc: 98.01%
Epoch 280 - loss: 0.9169, acc: 98.80% / test_loss: 0.9258, test_acc: 97.89%
Epoch 281 - loss: 0.9151, acc: 98.98% / test_loss: 0.9262, test_acc: 97.83%
Epoch 282 - loss: 0.9150, acc: 99.01% / test_loss: 0.9222, test_acc: 98.28%
Epoch 283 - loss: 0.9143, acc: 99.06% / test_loss: 0.9220, test_acc: 98.26%
Epoch 284 - loss: 0.9137, acc: 99.12% / test_loss: 0.9221, test_acc: 98.28%
Epoch 285 - loss: 0.9149, acc: 99.00% / test_loss: 0.9241, test_acc: 98.08%
Epoch 286 - loss: 0.9157, acc: 98.92% / test_loss: 0.9231, test_acc: 98.14%
Epoch 287 - loss: 0.9153, acc: 98.94% / test_loss: 0.9230, test_acc: 98.18%
Epoch 288 - loss: 0.9154, acc: 98.94% / test_loss: 0.9239, test_acc: 98.10%
Epoch 289 - loss: 0.9148, acc: 99.01% / test_loss: 0.9233, test_acc: 98.14%
Epoch 290 - loss: 0.9141, acc: 99.08% / test_loss: 0.9235, test_acc: 98.14%
Epoch 291 - loss: 0.9141, acc: 99.08% / test_loss: 0.9235, test_acc: 98.11%
Epoch 292 - loss: 0.9145, acc: 99.04% / test_loss: 0.9224, test_acc: 98.23%
Epoch 293 - loss: 0.9150, acc: 98.99% / test_loss: 0.9226, test_acc: 98.23%
Epoch 294 - loss: 0.9157, acc: 98.89% / test_loss: 0.9229, test_acc: 98.17%
Epoch 295 - loss: 0.9141, acc: 99.09% / test_loss: 0.9234, test_acc: 98.11%
Epoch 296 - loss: 0.9141, acc: 99.08% / test_loss: 0.9221, test_acc: 98.24%
Epoch 297 - loss: 0.9144, acc: 99.06% / test_loss: 0.9237, test_acc: 98.10%
Epoch 298 - loss: 0.9148, acc: 99.01% / test_loss: 0.9224, test_acc: 98.24%
Epoch 299 - loss: 0.9158, acc: 98.91% / test_loss: 0.9221, test_acc: 98.27%
Epoch 300 - loss: 0.9147, acc: 99.02% / test_loss: 0.9232, test_acc: 98.18%
Epoch 301 - loss: 0.9139, acc: 99.09% / test_loss: 0.9219, test_acc: 98.29%
Epoch 302 - loss: 0.9134, acc: 99.15% / test_loss: 0.9222, test_acc: 98.27%
Epoch 303 - loss: 0.9141, acc: 99.08% / test_loss: 0.9222, test_acc: 98.27%
Epoch 304 - loss: 0.9155, acc: 98.91% / test_loss: 0.9248, test_acc: 98.00%
Epoch 305 - loss: 0.9156, acc: 98.93% / test_loss: 0.9247, test_acc: 97.99%
Epoch 306 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.11%
Epoch 307 - loss: 0.9151, acc: 98.96% / test_loss: 0.9234, test_acc: 98.15%
Epoch 308 - loss: 0.9145, acc: 99.05% / test_loss: 0.9243, test_acc: 98.06%
Epoch 309 - loss: 0.9150, acc: 99.00% / test_loss: 0.9234, test_acc: 98.11%
Epoch 310 - loss: 0.9134, acc: 99.14% / test_loss: 0.9232, test_acc: 98.17%
Epoch 311 - loss: 0.9140, acc: 99.08% / test_loss: 0.9251, test_acc: 97.94%
Epoch 312 - loss: 0.9150, acc: 99.00% / test_loss: 0.9252, test_acc: 97.95%
Epoch 313 - loss: 0.9141, acc: 99.08% / test_loss: 0.9246, test_acc: 98.03%
Epoch 314 - loss: 0.9140, acc: 99.09% / test_loss: 0.9241, test_acc: 98.05%
Epoch 315 - loss: 0.9134, acc: 99.14% / test_loss: 0.9230, test_acc: 98.18%
Epoch 316 - loss: 0.9141, acc: 99.09% / test_loss: 0.9225, test_acc: 98.21%
Epoch 317 - loss: 0.9140, acc: 99.08% / test_loss: 0.9254, test_acc: 97.95%
Epoch 318 - loss: 0.9152, acc: 98.96% / test_loss: 0.9233, test_acc: 98.14%
Epoch 319 - loss: 0.9146, acc: 99.02% / test_loss: 0.9239, test_acc: 98.10%
Epoch 320 - loss: 0.9149, acc: 99.00% / test_loss: 0.9235, test_acc: 98.18%
Epoch 321 - loss: 0.9149, acc: 99.02% / test_loss: 0.9233, test_acc: 98.16%
Epoch 322 - loss: 0.9137, acc: 99.10% / test_loss: 0.9219, test_acc: 98.29%
Epoch 323 - loss: 0.9140, acc: 99.07% / test_loss: 0.9228, test_acc: 98.20%
Epoch 324 - loss: 0.9142, acc: 99.07% / test_loss: 0.9236, test_acc: 98.14%
Epoch 325 - loss: 0.9140, acc: 99.09% / test_loss: 0.9221, test_acc: 98.29%
Epoch 326 - loss: 0.9150, acc: 98.97% / test_loss: 0.9250, test_acc: 98.00%
Epoch 327 - loss: 0.9139, acc: 99.11% / test_loss: 0.9223, test_acc: 98.25%
Epoch 328 - loss: 0.9134, acc: 99.14% / test_loss: 0.9220, test_acc: 98.27%
Epoch 329 - loss: 0.9136, acc: 99.14% / test_loss: 0.9234, test_acc: 98.16%
Epoch 330 - loss: 0.9145, acc: 99.05% / test_loss: 0.9228, test_acc: 98.19%
Epoch 331 - loss: 0.9136, acc: 99.14% / test_loss: 0.9223, test_acc: 98.26%
Epoch 332 - loss: 0.9140, acc: 99.09% / test_loss: 0.9243, test_acc: 98.03%
Epoch 333 - loss: 0.9148, acc: 99.03% / test_loss: 0.9235, test_acc: 98.10%
Epoch 334 - loss: 0.9138, acc: 99.09% / test_loss: 0.9235, test_acc: 98.14%
Epoch 335 - loss: 0.9144, acc: 99.03% / test_loss: 0.9231, test_acc: 98.17%
Epoch 336 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.23%
Epoch 337 - loss: 0.9142, acc: 99.07% / test_loss: 0.9235, test_acc: 98.14%
Epoch 338 - loss: 0.9146, acc: 99.03% / test_loss: 0.9233, test_acc: 98.16%
Epoch 339 - loss: 0.9143, acc: 99.06% / test_loss: 0.9283, test_acc: 97.65%
Epoch 340 - loss: 0.9159, acc: 98.88% / test_loss: 0.9232, test_acc: 98.14%
Epoch 341 - loss: 0.9145, acc: 99.03% / test_loss: 0.9238, test_acc: 98.09%
Epoch 342 - loss: 0.9148, acc: 99.00% / test_loss: 0.9240, test_acc: 98.04%
Epoch 343 - loss: 0.9140, acc: 99.08% / test_loss: 0.9221, test_acc: 98.25%
Epoch 344 - loss: 0.9132, acc: 99.16% / test_loss: 0.9223, test_acc: 98.26%
Epoch 345 - loss: 0.9171, acc: 98.75% / test_loss: 0.9236, test_acc: 98.12%
Epoch 346 - loss: 0.9143, acc: 99.05% / test_loss: 0.9251, test_acc: 97.98%
Epoch 347 - loss: 0.9154, acc: 98.96% / test_loss: 0.9243, test_acc: 98.01%
Epoch 348 - loss: 0.9141, acc: 99.08% / test_loss: 0.9226, test_acc: 98.20%
Epoch 349 - loss: 0.9146, acc: 99.02% / test_loss: 0.9225, test_acc: 98.22%
Epoch 350 - loss: 0.9149, acc: 98.99% / test_loss: 0.9223, test_acc: 98.29%
Epoch 351 - loss: 0.9137, acc: 99.10% / test_loss: 0.9232, test_acc: 98.14%
Epoch 352 - loss: 0.9140, acc: 99.09% / test_loss: 0.9233, test_acc: 98.14%
Epoch 353 - loss: 0.9131, acc: 99.18% / test_loss: 0.9230, test_acc: 98.15%
Epoch 354 - loss: 0.9132, acc: 99.16% / test_loss: 0.9224, test_acc: 98.26%
Epoch 355 - loss: 0.9147, acc: 99.03% / test_loss: 0.9245, test_acc: 97.99%
Epoch 356 - loss: 0.9157, acc: 98.91% / test_loss: 0.9228, test_acc: 98.20%
Epoch 357 - loss: 0.9138, acc: 99.10% / test_loss: 0.9239, test_acc: 98.07%
Epoch 358 - loss: 0.9148, acc: 99.00% / test_loss: 0.9262, test_acc: 97.83%
Epoch 359 - loss: 0.9143, acc: 99.06% / test_loss: 0.9229, test_acc: 98.20%
Epoch 360 - loss: 0.9143, acc: 99.06% / test_loss: 0.9233, test_acc: 98.14%
Epoch 361 - loss: 0.9136, acc: 99.12% / test_loss: 0.9222, test_acc: 98.27%
Epoch 362 - loss: 0.9132, acc: 99.16% / test_loss: 0.9222, test_acc: 98.24%
Epoch 363 - loss: 0.9131, acc: 99.18% / test_loss: 0.9221, test_acc: 98.27%
Epoch 364 - loss: 0.9131, acc: 99.17% / test_loss: 0.9226, test_acc: 98.22%
Epoch 365 - loss: 0.9143, acc: 99.06% / test_loss: 0.9239, test_acc: 98.10%
Epoch 366 - loss: 0.9158, acc: 98.91% / test_loss: 0.9248, test_acc: 98.03%
Epoch 367 - loss: 0.9141, acc: 99.07% / test_loss: 0.9238, test_acc: 98.10%
Epoch 368 - loss: 0.9142, acc: 99.08% / test_loss: 0.9232, test_acc: 98.17%
Epoch 369 - loss: 0.9137, acc: 99.11% / test_loss: 0.9233, test_acc: 98.15%
Epoch 370 - loss: 0.9135, acc: 99.13% / test_loss: 0.9246, test_acc: 97.98%
Epoch 371 - loss: 0.9145, acc: 99.02% / test_loss: 0.9264, test_acc: 97.80%
Epoch 372 - loss: 0.9142, acc: 99.09% / test_loss: 0.9231, test_acc: 98.15%
Epoch 373 - loss: 0.9133, acc: 99.15% / test_loss: 0.9227, test_acc: 98.21%
Epoch 374 - loss: 0.9140, acc: 99.10% / test_loss: 0.9242, test_acc: 98.09%
Epoch 375 - loss: 0.9139, acc: 99.12% / test_loss: 0.9236, test_acc: 98.10%
Epoch 376 - loss: 0.9133, acc: 99.15% / test_loss: 0.9236, test_acc: 98.12%
Epoch 377 - loss: 0.9139, acc: 99.09% / test_loss: 0.9223, test_acc: 98.24%
Epoch 378 - loss: 0.9130, acc: 99.18% / test_loss: 0.9219, test_acc: 98.29%
Epoch 379 - loss: 0.9130, acc: 99.19% / test_loss: 0.9240, test_acc: 98.07%
Epoch 380 - loss: 0.9153, acc: 98.94% / test_loss: 0.9270, test_acc: 97.76%
Epoch 381 - loss: 0.9145, acc: 99.03% / test_loss: 0.9230, test_acc: 98.17%
Epoch 382 - loss: 0.9140, acc: 99.08% / test_loss: 0.9229, test_acc: 98.18%
Epoch 383 - loss: 0.9138, acc: 99.10% / test_loss: 0.9235, test_acc: 98.12%
Epoch 384 - loss: 0.9146, acc: 99.00% / test_loss: 0.9251, test_acc: 97.98%
Epoch 385 - loss: 0.9149, acc: 99.00% / test_loss: 0.9232, test_acc: 98.19%
Epoch 386 - loss: 0.9145, acc: 99.05% / test_loss: 0.9239, test_acc: 98.05%
Epoch 387 - loss: 0.9145, acc: 99.03% / test_loss: 0.9220, test_acc: 98.29%
Epoch 388 - loss: 0.9140, acc: 99.09% / test_loss: 0.9223, test_acc: 98.24%
Epoch 389 - loss: 0.9150, acc: 98.98% / test_loss: 0.9235, test_acc: 98.13%
Epoch 390 - loss: 0.9133, acc: 99.16% / test_loss: 0.9219, test_acc: 98.26%
Epoch 391 - loss: 0.9169, acc: 98.80% / test_loss: 0.9233, test_acc: 98.12%
Epoch 392 - loss: 0.9142, acc: 99.06% / test_loss: 0.9229, test_acc: 98.17%
Epoch 393 - loss: 0.9140, acc: 99.07% / test_loss: 0.9230, test_acc: 98.17%
Epoch 394 - loss: 0.9150, acc: 98.97% / test_loss: 0.9277, test_acc: 97.70%
Epoch 395 - loss: 0.9139, acc: 99.09% / test_loss: 0.9228, test_acc: 98.20%
Epoch 396 - loss: 0.9136, acc: 99.12% / test_loss: 0.9237, test_acc: 98.12%
Epoch 397 - loss: 0.9147, acc: 99.00% / test_loss: 0.9221, test_acc: 98.29%
Epoch 398 - loss: 0.9144, acc: 99.04% / test_loss: 0.9218, test_acc: 98.30%
Epoch 399 - loss: 0.9131, acc: 99.16% / test_loss: 0.9213, test_acc: 98.34%
Epoch 400 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.37%
Best test accuracy 98.37% in epoch 400.
----------------------------------------------------------------------------------------------------
Run 7
Epoch 1 - loss: 1.3347, acc: 57.78% / test_loss: 1.2064, test_acc: 71.40%
Epoch 2 - loss: 1.1408, acc: 77.58% / test_loss: 1.0692, test_acc: 84.88%
Epoch 3 - loss: 1.0620, acc: 85.03% / test_loss: 1.0402, test_acc: 87.04%
Epoch 4 - loss: 1.0488, acc: 86.02% / test_loss: 1.0357, test_acc: 87.26%
Epoch 5 - loss: 1.0401, acc: 86.75% / test_loss: 1.0257, test_acc: 88.06%
Epoch 6 - loss: 1.0343, acc: 87.23% / test_loss: 1.0248, test_acc: 88.04%
Epoch 7 - loss: 1.0306, acc: 87.49% / test_loss: 1.0203, test_acc: 88.47%
Epoch 8 - loss: 1.0274, acc: 87.78% / test_loss: 1.0186, test_acc: 88.73%
Epoch 9 - loss: 1.0257, acc: 87.98% / test_loss: 1.0171, test_acc: 88.74%
Epoch 10 - loss: 1.0249, acc: 88.00% / test_loss: 1.0170, test_acc: 88.76%
Epoch 11 - loss: 1.0213, acc: 88.37% / test_loss: 1.0182, test_acc: 88.69%
Epoch 12 - loss: 1.0216, acc: 88.24% / test_loss: 1.0140, test_acc: 89.17%
Epoch 13 - loss: 1.0200, acc: 88.44% / test_loss: 1.0121, test_acc: 89.24%
Epoch 14 - loss: 1.0178, acc: 88.60% / test_loss: 1.0122, test_acc: 89.23%
Epoch 15 - loss: 1.0171, acc: 88.67% / test_loss: 1.0099, test_acc: 89.46%
Epoch 16 - loss: 1.0143, acc: 88.95% / test_loss: 1.0071, test_acc: 89.68%
Epoch 17 - loss: 1.0151, acc: 88.94% / test_loss: 1.0116, test_acc: 89.41%
Epoch 18 - loss: 1.0155, acc: 88.82% / test_loss: 1.0091, test_acc: 89.61%
Epoch 19 - loss: 1.0134, acc: 88.98% / test_loss: 1.0105, test_acc: 89.35%
Epoch 20 - loss: 1.0121, acc: 89.14% / test_loss: 1.0065, test_acc: 89.63%
Epoch 21 - loss: 1.0127, acc: 89.07% / test_loss: 1.0065, test_acc: 89.69%
Epoch 22 - loss: 1.0112, acc: 89.24% / test_loss: 1.0042, test_acc: 89.85%
Epoch 23 - loss: 1.0110, acc: 89.29% / test_loss: 1.0070, test_acc: 89.77%
Epoch 24 - loss: 1.0096, acc: 89.38% / test_loss: 1.0039, test_acc: 89.94%
Epoch 25 - loss: 1.0089, acc: 89.45% / test_loss: 1.0041, test_acc: 89.94%
Epoch 26 - loss: 1.0077, acc: 89.57% / test_loss: 1.0021, test_acc: 90.15%
Epoch 27 - loss: 1.0083, acc: 89.50% / test_loss: 1.0043, test_acc: 89.88%
Epoch 28 - loss: 1.0077, acc: 89.52% / test_loss: 1.0083, test_acc: 89.42%
Epoch 29 - loss: 1.0080, acc: 89.52% / test_loss: 1.0009, test_acc: 90.15%
Epoch 30 - loss: 1.0054, acc: 89.75% / test_loss: 1.0040, test_acc: 89.89%
Epoch 31 - loss: 1.0068, acc: 89.66% / test_loss: 1.0037, test_acc: 90.07%
Epoch 32 - loss: 1.0070, acc: 89.62% / test_loss: 1.0047, test_acc: 89.98%
Epoch 33 - loss: 1.0057, acc: 89.71% / test_loss: 1.0039, test_acc: 89.95%
Epoch 34 - loss: 1.0062, acc: 89.68% / test_loss: 1.0030, test_acc: 90.03%
Epoch 35 - loss: 1.0055, acc: 89.76% / test_loss: 1.0009, test_acc: 90.19%
Epoch 36 - loss: 1.0054, acc: 89.76% / test_loss: 1.0015, test_acc: 90.18%
Epoch 37 - loss: 1.0038, acc: 89.88% / test_loss: 1.0035, test_acc: 89.88%
Epoch 38 - loss: 1.0025, acc: 90.01% / test_loss: 1.0001, test_acc: 90.26%
Epoch 39 - loss: 1.0024, acc: 90.00% / test_loss: 1.0039, test_acc: 89.91%
Epoch 40 - loss: 1.0031, acc: 89.96% / test_loss: 0.9985, test_acc: 90.36%
Epoch 41 - loss: 1.0027, acc: 90.00% / test_loss: 0.9986, test_acc: 90.37%
Epoch 42 - loss: 1.0016, acc: 90.12% / test_loss: 1.0008, test_acc: 90.22%
Epoch 43 - loss: 1.0007, acc: 90.15% / test_loss: 1.0005, test_acc: 90.23%
Epoch 44 - loss: 1.0012, acc: 90.12% / test_loss: 0.9974, test_acc: 90.46%
Epoch 45 - loss: 1.0004, acc: 90.18% / test_loss: 0.9975, test_acc: 90.46%
Epoch 46 - loss: 1.0011, acc: 90.12% / test_loss: 0.9981, test_acc: 90.47%
Epoch 47 - loss: 1.0012, acc: 90.16% / test_loss: 1.0011, test_acc: 90.23%
Epoch 48 - loss: 1.0002, acc: 90.22% / test_loss: 0.9966, test_acc: 90.58%
Epoch 49 - loss: 0.9999, acc: 90.24% / test_loss: 0.9989, test_acc: 90.40%
Epoch 50 - loss: 1.0005, acc: 90.16% / test_loss: 0.9985, test_acc: 90.35%
Epoch 51 - loss: 1.0001, acc: 90.22% / test_loss: 0.9995, test_acc: 90.31%
Epoch 52 - loss: 1.0029, acc: 89.97% / test_loss: 0.9976, test_acc: 90.46%
Epoch 53 - loss: 0.9988, acc: 90.36% / test_loss: 0.9986, test_acc: 90.31%
Epoch 54 - loss: 0.9981, acc: 90.38% / test_loss: 0.9967, test_acc: 90.58%
Epoch 55 - loss: 0.9973, acc: 90.49% / test_loss: 0.9968, test_acc: 90.56%
Epoch 56 - loss: 0.9978, acc: 90.44% / test_loss: 0.9967, test_acc: 90.58%
Epoch 57 - loss: 0.9964, acc: 90.56% / test_loss: 0.9944, test_acc: 90.74%
Epoch 58 - loss: 0.9968, acc: 90.51% / test_loss: 0.9965, test_acc: 90.57%
Epoch 59 - loss: 0.9972, acc: 90.51% / test_loss: 0.9969, test_acc: 90.56%
Epoch 60 - loss: 0.9966, acc: 90.55% / test_loss: 0.9937, test_acc: 90.82%
Epoch 61 - loss: 0.9960, acc: 90.62% / test_loss: 0.9946, test_acc: 90.76%
Epoch 62 - loss: 0.9929, acc: 90.86% / test_loss: 0.9912, test_acc: 91.17%
Epoch 63 - loss: 0.9925, acc: 91.01% / test_loss: 1.0003, test_acc: 90.13%
Epoch 64 - loss: 0.9870, acc: 91.49% / test_loss: 0.9884, test_acc: 91.34%
Epoch 65 - loss: 0.9853, acc: 91.70% / test_loss: 0.9934, test_acc: 90.94%
Epoch 66 - loss: 0.9837, acc: 91.87% / test_loss: 0.9869, test_acc: 91.61%
Epoch 67 - loss: 0.9817, acc: 92.01% / test_loss: 0.9841, test_acc: 91.78%
Epoch 68 - loss: 0.9806, acc: 92.13% / test_loss: 0.9829, test_acc: 91.89%
Epoch 69 - loss: 0.9792, acc: 92.28% / test_loss: 0.9805, test_acc: 92.14%
Epoch 70 - loss: 0.9796, acc: 92.25% / test_loss: 0.9805, test_acc: 92.13%
Epoch 71 - loss: 0.9777, acc: 92.42% / test_loss: 0.9810, test_acc: 92.09%
Epoch 72 - loss: 0.9794, acc: 92.21% / test_loss: 0.9820, test_acc: 92.07%
Epoch 73 - loss: 0.9778, acc: 92.39% / test_loss: 0.9804, test_acc: 92.12%
Epoch 74 - loss: 0.9762, acc: 92.57% / test_loss: 0.9815, test_acc: 92.06%
Epoch 75 - loss: 0.9761, acc: 92.60% / test_loss: 0.9810, test_acc: 92.11%
Epoch 76 - loss: 0.9757, acc: 92.63% / test_loss: 0.9784, test_acc: 92.32%
Epoch 77 - loss: 0.9759, acc: 92.57% / test_loss: 0.9787, test_acc: 92.32%
Epoch 78 - loss: 0.9763, acc: 92.55% / test_loss: 0.9815, test_acc: 92.03%
Epoch 79 - loss: 0.9764, acc: 92.52% / test_loss: 0.9776, test_acc: 92.43%
Epoch 80 - loss: 0.9754, acc: 92.64% / test_loss: 0.9813, test_acc: 92.08%
Epoch 81 - loss: 0.9756, acc: 92.65% / test_loss: 0.9801, test_acc: 92.25%
Epoch 82 - loss: 0.9742, acc: 92.74% / test_loss: 0.9810, test_acc: 92.20%
Epoch 83 - loss: 0.9745, acc: 92.71% / test_loss: 0.9817, test_acc: 92.01%
Epoch 84 - loss: 0.9737, acc: 92.80% / test_loss: 0.9776, test_acc: 92.40%
Epoch 85 - loss: 0.9764, acc: 92.53% / test_loss: 0.9794, test_acc: 92.27%
Epoch 86 - loss: 0.9747, acc: 92.71% / test_loss: 0.9769, test_acc: 92.43%
Epoch 87 - loss: 0.9743, acc: 92.73% / test_loss: 0.9780, test_acc: 92.31%
Epoch 88 - loss: 0.9739, acc: 92.80% / test_loss: 0.9776, test_acc: 92.41%
Epoch 89 - loss: 0.9741, acc: 92.75% / test_loss: 0.9766, test_acc: 92.45%
Epoch 90 - loss: 0.9746, acc: 92.74% / test_loss: 0.9774, test_acc: 92.44%
Epoch 91 - loss: 0.9732, acc: 92.81% / test_loss: 0.9769, test_acc: 92.49%
Epoch 92 - loss: 0.9749, acc: 92.67% / test_loss: 0.9827, test_acc: 91.94%
Epoch 93 - loss: 0.9747, acc: 92.71% / test_loss: 0.9769, test_acc: 92.47%
Epoch 94 - loss: 0.9740, acc: 92.75% / test_loss: 0.9790, test_acc: 92.27%
Epoch 95 - loss: 0.9728, acc: 92.87% / test_loss: 0.9763, test_acc: 92.55%
Epoch 96 - loss: 0.9725, acc: 92.90% / test_loss: 0.9762, test_acc: 92.47%
Epoch 97 - loss: 0.9723, acc: 92.91% / test_loss: 0.9787, test_acc: 92.26%
Epoch 98 - loss: 0.9730, acc: 92.85% / test_loss: 0.9761, test_acc: 92.53%
Epoch 99 - loss: 0.9733, acc: 92.82% / test_loss: 0.9820, test_acc: 92.01%
Epoch 100 - loss: 0.9719, acc: 92.95% / test_loss: 0.9760, test_acc: 92.55%
Epoch 101 - loss: 0.9719, acc: 92.96% / test_loss: 0.9767, test_acc: 92.47%
Epoch 102 - loss: 0.9719, acc: 92.97% / test_loss: 0.9766, test_acc: 92.51%
Epoch 103 - loss: 0.9719, acc: 92.96% / test_loss: 0.9891, test_acc: 91.26%
Epoch 104 - loss: 0.9715, acc: 93.04% / test_loss: 0.9753, test_acc: 92.58%
Epoch 105 - loss: 0.9702, acc: 93.14% / test_loss: 0.9759, test_acc: 92.56%
Epoch 106 - loss: 0.9712, acc: 93.04% / test_loss: 0.9749, test_acc: 92.65%
Epoch 107 - loss: 0.9705, acc: 93.12% / test_loss: 0.9739, test_acc: 92.78%
Epoch 108 - loss: 0.9707, acc: 93.06% / test_loss: 0.9747, test_acc: 92.71%
Epoch 109 - loss: 0.9706, acc: 93.09% / test_loss: 0.9746, test_acc: 92.72%
Epoch 110 - loss: 0.9693, acc: 93.21% / test_loss: 0.9749, test_acc: 92.64%
Epoch 111 - loss: 0.9687, acc: 93.27% / test_loss: 0.9728, test_acc: 92.85%
Epoch 112 - loss: 0.9686, acc: 93.28% / test_loss: 0.9724, test_acc: 92.89%
Epoch 113 - loss: 0.9686, acc: 93.28% / test_loss: 0.9781, test_acc: 92.34%
Epoch 114 - loss: 0.9698, acc: 93.17% / test_loss: 0.9757, test_acc: 92.56%
Epoch 115 - loss: 0.9684, acc: 93.33% / test_loss: 0.9723, test_acc: 92.93%
Epoch 116 - loss: 0.9682, acc: 93.33% / test_loss: 0.9741, test_acc: 92.71%
Epoch 117 - loss: 0.9683, acc: 93.31% / test_loss: 0.9717, test_acc: 92.93%
Epoch 118 - loss: 0.9681, acc: 93.31% / test_loss: 0.9763, test_acc: 92.50%
Epoch 119 - loss: 0.9692, acc: 93.23% / test_loss: 0.9742, test_acc: 92.74%
Epoch 120 - loss: 0.9696, acc: 93.20% / test_loss: 0.9714, test_acc: 92.99%
Epoch 121 - loss: 0.9689, acc: 93.25% / test_loss: 0.9720, test_acc: 92.91%
Epoch 122 - loss: 0.9683, acc: 93.30% / test_loss: 0.9723, test_acc: 92.88%
Epoch 123 - loss: 0.9682, acc: 93.31% / test_loss: 0.9727, test_acc: 92.90%
Epoch 124 - loss: 0.9681, acc: 93.33% / test_loss: 0.9708, test_acc: 93.05%
Epoch 125 - loss: 0.9693, acc: 93.19% / test_loss: 0.9707, test_acc: 93.05%
Epoch 126 - loss: 0.9672, acc: 93.39% / test_loss: 0.9729, test_acc: 92.83%
Epoch 127 - loss: 0.9671, acc: 93.43% / test_loss: 0.9722, test_acc: 92.90%
Epoch 128 - loss: 0.9676, acc: 93.39% / test_loss: 0.9735, test_acc: 92.80%
Epoch 129 - loss: 0.9681, acc: 93.33% / test_loss: 0.9738, test_acc: 92.72%
Epoch 130 - loss: 0.9686, acc: 93.30% / test_loss: 0.9710, test_acc: 93.05%
Epoch 131 - loss: 0.9674, acc: 93.39% / test_loss: 0.9753, test_acc: 92.65%
Epoch 132 - loss: 0.9676, acc: 93.38% / test_loss: 0.9712, test_acc: 93.00%
Epoch 133 - loss: 0.9659, acc: 93.53% / test_loss: 0.9705, test_acc: 93.09%
Epoch 134 - loss: 0.9654, acc: 93.57% / test_loss: 0.9699, test_acc: 93.08%
Epoch 135 - loss: 0.9665, acc: 93.46% / test_loss: 0.9719, test_acc: 92.93%
Epoch 136 - loss: 0.9672, acc: 93.39% / test_loss: 0.9708, test_acc: 93.00%
Epoch 137 - loss: 0.9667, acc: 93.45% / test_loss: 0.9721, test_acc: 92.91%
Epoch 138 - loss: 0.9675, acc: 93.39% / test_loss: 0.9715, test_acc: 92.96%
Epoch 139 - loss: 0.9656, acc: 93.57% / test_loss: 0.9699, test_acc: 93.19%
Epoch 140 - loss: 0.9662, acc: 93.53% / test_loss: 0.9705, test_acc: 93.08%
Epoch 141 - loss: 0.9669, acc: 93.42% / test_loss: 0.9719, test_acc: 92.94%
Epoch 142 - loss: 0.9654, acc: 93.57% / test_loss: 0.9706, test_acc: 93.05%
Epoch 143 - loss: 0.9659, acc: 93.53% / test_loss: 0.9717, test_acc: 92.96%
Epoch 144 - loss: 0.9658, acc: 93.52% / test_loss: 0.9705, test_acc: 93.03%
Epoch 145 - loss: 0.9670, acc: 93.45% / test_loss: 0.9717, test_acc: 92.95%
Epoch 146 - loss: 0.9659, acc: 93.54% / test_loss: 0.9697, test_acc: 93.20%
Epoch 147 - loss: 0.9668, acc: 93.43% / test_loss: 0.9716, test_acc: 92.95%
Epoch 148 - loss: 0.9660, acc: 93.50% / test_loss: 0.9701, test_acc: 93.11%
Epoch 149 - loss: 0.9664, acc: 93.49% / test_loss: 0.9702, test_acc: 93.08%
Epoch 150 - loss: 0.9670, acc: 93.40% / test_loss: 0.9690, test_acc: 93.23%
Epoch 151 - loss: 0.9651, acc: 93.62% / test_loss: 0.9693, test_acc: 93.24%
Epoch 152 - loss: 0.9650, acc: 93.63% / test_loss: 0.9701, test_acc: 93.17%
Epoch 153 - loss: 0.9650, acc: 93.61% / test_loss: 0.9695, test_acc: 93.17%
Epoch 154 - loss: 0.9647, acc: 93.67% / test_loss: 0.9703, test_acc: 93.07%
Epoch 155 - loss: 0.9652, acc: 93.62% / test_loss: 0.9698, test_acc: 93.13%
Epoch 156 - loss: 0.9651, acc: 93.61% / test_loss: 0.9691, test_acc: 93.19%
Epoch 157 - loss: 0.9649, acc: 93.62% / test_loss: 0.9694, test_acc: 93.16%
Epoch 158 - loss: 0.9681, acc: 93.30% / test_loss: 0.9737, test_acc: 92.74%
Epoch 159 - loss: 0.9657, acc: 93.55% / test_loss: 0.9698, test_acc: 93.13%
Epoch 160 - loss: 0.9655, acc: 93.57% / test_loss: 0.9700, test_acc: 93.13%
Epoch 161 - loss: 0.9650, acc: 93.61% / test_loss: 0.9701, test_acc: 93.14%
Epoch 162 - loss: 0.9640, acc: 93.70% / test_loss: 0.9694, test_acc: 93.16%
Epoch 163 - loss: 0.9656, acc: 93.54% / test_loss: 0.9714, test_acc: 93.00%
Epoch 164 - loss: 0.9648, acc: 93.64% / test_loss: 0.9692, test_acc: 93.24%
Epoch 165 - loss: 0.9647, acc: 93.67% / test_loss: 0.9700, test_acc: 93.15%
Epoch 166 - loss: 0.9646, acc: 93.67% / test_loss: 0.9710, test_acc: 93.01%
Epoch 167 - loss: 0.9651, acc: 93.60% / test_loss: 0.9704, test_acc: 93.06%
Epoch 168 - loss: 0.9649, acc: 93.61% / test_loss: 0.9717, test_acc: 92.96%
Epoch 169 - loss: 0.9636, acc: 93.74% / test_loss: 0.9703, test_acc: 93.12%
Epoch 170 - loss: 0.9646, acc: 93.63% / test_loss: 0.9694, test_acc: 93.16%
Epoch 171 - loss: 0.9648, acc: 93.60% / test_loss: 0.9692, test_acc: 93.19%
Epoch 172 - loss: 0.9632, acc: 93.79% / test_loss: 0.9679, test_acc: 93.36%
Epoch 173 - loss: 0.9625, acc: 93.82% / test_loss: 0.9675, test_acc: 93.36%
Epoch 174 - loss: 0.9633, acc: 93.77% / test_loss: 0.9687, test_acc: 93.27%
Epoch 175 - loss: 0.9639, acc: 93.70% / test_loss: 0.9698, test_acc: 93.16%
Epoch 176 - loss: 0.9642, acc: 93.70% / test_loss: 0.9674, test_acc: 93.36%
Epoch 177 - loss: 0.9635, acc: 93.77% / test_loss: 0.9699, test_acc: 93.06%
Epoch 178 - loss: 0.9640, acc: 93.70% / test_loss: 0.9683, test_acc: 93.27%
Epoch 179 - loss: 0.9628, acc: 93.84% / test_loss: 0.9686, test_acc: 93.21%
Epoch 180 - loss: 0.9637, acc: 93.71% / test_loss: 0.9681, test_acc: 93.32%
Epoch 181 - loss: 0.9652, acc: 93.58% / test_loss: 0.9680, test_acc: 93.34%
Epoch 182 - loss: 0.9628, acc: 93.82% / test_loss: 0.9723, test_acc: 92.86%
Epoch 183 - loss: 0.9638, acc: 93.72% / test_loss: 0.9672, test_acc: 93.37%
Epoch 184 - loss: 0.9625, acc: 93.84% / test_loss: 0.9678, test_acc: 93.30%
Epoch 185 - loss: 0.9643, acc: 93.66% / test_loss: 0.9752, test_acc: 92.53%
Epoch 186 - loss: 0.9638, acc: 93.74% / test_loss: 0.9687, test_acc: 93.24%
Epoch 187 - loss: 0.9632, acc: 93.76% / test_loss: 0.9673, test_acc: 93.36%
Epoch 188 - loss: 0.9635, acc: 93.74% / test_loss: 0.9696, test_acc: 93.14%
Epoch 189 - loss: 0.9631, acc: 93.80% / test_loss: 0.9683, test_acc: 93.24%
Epoch 190 - loss: 0.9631, acc: 93.80% / test_loss: 0.9672, test_acc: 93.37%
Epoch 191 - loss: 0.9631, acc: 93.79% / test_loss: 0.9682, test_acc: 93.27%
Epoch 192 - loss: 0.9625, acc: 93.86% / test_loss: 0.9673, test_acc: 93.36%
Epoch 193 - loss: 0.9622, acc: 93.88% / test_loss: 0.9683, test_acc: 93.29%
Epoch 194 - loss: 0.9633, acc: 93.77% / test_loss: 0.9683, test_acc: 93.28%
Epoch 195 - loss: 0.9617, acc: 93.88% / test_loss: 0.9665, test_acc: 93.45%
Epoch 196 - loss: 0.9635, acc: 93.74% / test_loss: 0.9697, test_acc: 93.17%
Epoch 197 - loss: 0.9640, acc: 93.69% / test_loss: 0.9693, test_acc: 93.13%
Epoch 198 - loss: 0.9622, acc: 93.88% / test_loss: 0.9688, test_acc: 93.23%
Epoch 199 - loss: 0.9624, acc: 93.87% / test_loss: 0.9674, test_acc: 93.37%
Epoch 200 - loss: 0.9624, acc: 93.85% / test_loss: 0.9665, test_acc: 93.45%
Epoch 201 - loss: 0.9626, acc: 93.82% / test_loss: 0.9665, test_acc: 93.45%
Epoch 202 - loss: 0.9617, acc: 93.90% / test_loss: 0.9693, test_acc: 93.12%
Epoch 203 - loss: 0.9621, acc: 93.93% / test_loss: 0.9668, test_acc: 93.39%
Epoch 204 - loss: 0.9614, acc: 93.94% / test_loss: 0.9662, test_acc: 93.51%
Epoch 205 - loss: 0.9616, acc: 93.94% / test_loss: 0.9673, test_acc: 93.36%
Epoch 206 - loss: 0.9617, acc: 93.91% / test_loss: 0.9689, test_acc: 93.20%
Epoch 207 - loss: 0.9621, acc: 93.90% / test_loss: 0.9678, test_acc: 93.30%
Epoch 208 - loss: 0.9623, acc: 93.85% / test_loss: 0.9671, test_acc: 93.39%
Epoch 209 - loss: 0.9614, acc: 93.94% / test_loss: 0.9691, test_acc: 93.19%
Epoch 210 - loss: 0.9621, acc: 93.88% / test_loss: 0.9679, test_acc: 93.27%
Epoch 211 - loss: 0.9618, acc: 93.91% / test_loss: 0.9682, test_acc: 93.27%
Epoch 212 - loss: 0.9638, acc: 93.72% / test_loss: 0.9695, test_acc: 93.20%
Epoch 213 - loss: 0.9623, acc: 93.87% / test_loss: 0.9691, test_acc: 93.18%
Epoch 214 - loss: 0.9613, acc: 93.95% / test_loss: 0.9664, test_acc: 93.44%
Epoch 215 - loss: 0.9613, acc: 93.95% / test_loss: 0.9672, test_acc: 93.40%
Epoch 216 - loss: 0.9616, acc: 93.95% / test_loss: 0.9688, test_acc: 93.24%
Epoch 217 - loss: 0.9619, acc: 93.92% / test_loss: 0.9675, test_acc: 93.35%
Epoch 218 - loss: 0.9622, acc: 93.88% / test_loss: 0.9681, test_acc: 93.32%
Epoch 219 - loss: 0.9630, acc: 93.76% / test_loss: 0.9686, test_acc: 93.21%
Epoch 220 - loss: 0.9618, acc: 93.90% / test_loss: 0.9669, test_acc: 93.40%
Epoch 221 - loss: 0.9609, acc: 93.98% / test_loss: 0.9674, test_acc: 93.37%
Epoch 222 - loss: 0.9616, acc: 93.94% / test_loss: 0.9660, test_acc: 93.50%
Epoch 223 - loss: 0.9609, acc: 93.99% / test_loss: 0.9670, test_acc: 93.42%
Epoch 224 - loss: 0.9621, acc: 93.86% / test_loss: 0.9676, test_acc: 93.30%
Epoch 225 - loss: 0.9608, acc: 94.01% / test_loss: 0.9663, test_acc: 93.45%
Epoch 226 - loss: 0.9612, acc: 93.98% / test_loss: 0.9677, test_acc: 93.28%
Epoch 227 - loss: 0.9605, acc: 94.01% / test_loss: 0.9665, test_acc: 93.42%
Epoch 228 - loss: 0.9618, acc: 93.89% / test_loss: 0.9673, test_acc: 93.36%
Epoch 229 - loss: 0.9622, acc: 93.85% / test_loss: 0.9685, test_acc: 93.27%
Epoch 230 - loss: 0.9623, acc: 93.88% / test_loss: 0.9669, test_acc: 93.41%
Epoch 231 - loss: 0.9626, acc: 93.84% / test_loss: 0.9687, test_acc: 93.24%
Epoch 232 - loss: 0.9617, acc: 93.92% / test_loss: 0.9671, test_acc: 93.39%
Epoch 233 - loss: 0.9637, acc: 93.73% / test_loss: 0.9681, test_acc: 93.29%
Epoch 234 - loss: 0.9614, acc: 93.96% / test_loss: 0.9662, test_acc: 93.50%
Epoch 235 - loss: 0.9613, acc: 93.94% / test_loss: 0.9676, test_acc: 93.32%
Epoch 236 - loss: 0.9616, acc: 93.93% / test_loss: 0.9678, test_acc: 93.33%
Epoch 237 - loss: 0.9614, acc: 93.94% / test_loss: 0.9667, test_acc: 93.41%
Epoch 238 - loss: 0.9609, acc: 93.99% / test_loss: 0.9668, test_acc: 93.43%
Epoch 239 - loss: 0.9610, acc: 93.97% / test_loss: 0.9721, test_acc: 92.89%
Epoch 240 - loss: 0.9615, acc: 93.93% / test_loss: 0.9663, test_acc: 93.46%
Epoch 241 - loss: 0.9612, acc: 93.94% / test_loss: 0.9669, test_acc: 93.40%
Epoch 242 - loss: 0.9628, acc: 93.80% / test_loss: 0.9713, test_acc: 92.99%
Epoch 243 - loss: 0.9623, acc: 93.86% / test_loss: 0.9669, test_acc: 93.41%
Epoch 244 - loss: 0.9613, acc: 93.93% / test_loss: 0.9670, test_acc: 93.42%
Epoch 245 - loss: 0.9614, acc: 93.95% / test_loss: 0.9662, test_acc: 93.46%
Epoch 246 - loss: 0.9614, acc: 93.94% / test_loss: 0.9669, test_acc: 93.42%
Epoch 247 - loss: 0.9613, acc: 93.94% / test_loss: 0.9653, test_acc: 93.57%
Epoch 248 - loss: 0.9613, acc: 93.98% / test_loss: 0.9658, test_acc: 93.48%
Epoch 249 - loss: 0.9609, acc: 94.00% / test_loss: 0.9671, test_acc: 93.39%
Epoch 250 - loss: 0.9610, acc: 93.98% / test_loss: 0.9692, test_acc: 93.17%
Epoch 251 - loss: 0.9605, acc: 94.04% / test_loss: 0.9654, test_acc: 93.51%
Epoch 252 - loss: 0.9601, acc: 94.06% / test_loss: 0.9657, test_acc: 93.51%
Epoch 253 - loss: 0.9603, acc: 94.03% / test_loss: 0.9651, test_acc: 93.59%
Epoch 254 - loss: 0.9607, acc: 94.01% / test_loss: 0.9670, test_acc: 93.40%
Epoch 255 - loss: 0.9612, acc: 93.98% / test_loss: 0.9661, test_acc: 93.48%
Epoch 256 - loss: 0.9620, acc: 93.88% / test_loss: 0.9684, test_acc: 93.24%
Epoch 257 - loss: 0.9611, acc: 93.98% / test_loss: 0.9685, test_acc: 93.24%
Epoch 258 - loss: 0.9616, acc: 93.91% / test_loss: 0.9679, test_acc: 93.33%
Epoch 259 - loss: 0.9624, acc: 93.82% / test_loss: 0.9676, test_acc: 93.31%
Epoch 260 - loss: 0.9622, acc: 93.87% / test_loss: 0.9673, test_acc: 93.37%
Epoch 261 - loss: 0.9602, acc: 94.07% / test_loss: 0.9657, test_acc: 93.51%
Epoch 262 - loss: 0.9612, acc: 93.96% / test_loss: 0.9655, test_acc: 93.51%
Epoch 263 - loss: 0.9616, acc: 93.91% / test_loss: 0.9709, test_acc: 93.01%
Epoch 264 - loss: 0.9614, acc: 93.94% / test_loss: 0.9657, test_acc: 93.50%
Epoch 265 - loss: 0.9604, acc: 94.02% / test_loss: 0.9669, test_acc: 93.42%
Epoch 266 - loss: 0.9400, acc: 96.31% / test_loss: 0.9256, test_acc: 97.89%
Epoch 267 - loss: 0.9174, acc: 98.74% / test_loss: 0.9248, test_acc: 97.97%
Epoch 268 - loss: 0.9178, acc: 98.69% / test_loss: 0.9238, test_acc: 98.08%
Epoch 269 - loss: 0.9186, acc: 98.62% / test_loss: 0.9257, test_acc: 97.90%
Epoch 270 - loss: 0.9184, acc: 98.61% / test_loss: 0.9233, test_acc: 98.16%
Epoch 271 - loss: 0.9165, acc: 98.82% / test_loss: 0.9241, test_acc: 98.04%
Epoch 272 - loss: 0.9173, acc: 98.74% / test_loss: 0.9234, test_acc: 98.14%
Epoch 273 - loss: 0.9187, acc: 98.60% / test_loss: 0.9262, test_acc: 97.86%
Epoch 274 - loss: 0.9183, acc: 98.63% / test_loss: 0.9258, test_acc: 97.89%
Epoch 275 - loss: 0.9182, acc: 98.67% / test_loss: 0.9230, test_acc: 98.17%
Epoch 276 - loss: 0.9169, acc: 98.78% / test_loss: 0.9225, test_acc: 98.20%
Epoch 277 - loss: 0.9167, acc: 98.79% / test_loss: 0.9279, test_acc: 97.67%
Epoch 278 - loss: 0.9176, acc: 98.70% / test_loss: 0.9242, test_acc: 98.05%
Epoch 279 - loss: 0.9181, acc: 98.66% / test_loss: 0.9235, test_acc: 98.12%
Epoch 280 - loss: 0.9163, acc: 98.83% / test_loss: 0.9229, test_acc: 98.16%
Epoch 281 - loss: 0.9165, acc: 98.80% / test_loss: 0.9253, test_acc: 97.95%
Epoch 282 - loss: 0.9182, acc: 98.65% / test_loss: 0.9290, test_acc: 97.58%
Epoch 283 - loss: 0.9171, acc: 98.77% / test_loss: 0.9237, test_acc: 98.10%
Epoch 284 - loss: 0.9169, acc: 98.78% / test_loss: 0.9245, test_acc: 98.01%
Epoch 285 - loss: 0.9167, acc: 98.78% / test_loss: 0.9236, test_acc: 98.09%
Epoch 286 - loss: 0.9169, acc: 98.78% / test_loss: 0.9242, test_acc: 98.04%
Epoch 287 - loss: 0.9160, acc: 98.87% / test_loss: 0.9234, test_acc: 98.12%
Epoch 288 - loss: 0.9171, acc: 98.77% / test_loss: 0.9244, test_acc: 98.02%
Epoch 289 - loss: 0.9171, acc: 98.78% / test_loss: 0.9235, test_acc: 98.14%
Epoch 290 - loss: 0.9169, acc: 98.77% / test_loss: 0.9246, test_acc: 98.01%
Epoch 291 - loss: 0.9177, acc: 98.70% / test_loss: 0.9245, test_acc: 98.00%
Epoch 292 - loss: 0.9171, acc: 98.76% / test_loss: 0.9257, test_acc: 97.93%
Epoch 293 - loss: 0.9177, acc: 98.68% / test_loss: 0.9226, test_acc: 98.23%
Epoch 294 - loss: 0.9145, acc: 99.03% / test_loss: 0.9242, test_acc: 98.03%
Epoch 295 - loss: 0.9133, acc: 99.15% / test_loss: 0.9209, test_acc: 98.38%
Epoch 296 - loss: 0.9143, acc: 99.09% / test_loss: 0.9253, test_acc: 97.98%
Epoch 297 - loss: 0.9123, acc: 99.28% / test_loss: 0.9211, test_acc: 98.40%
Epoch 298 - loss: 0.9132, acc: 99.16% / test_loss: 0.9212, test_acc: 98.33%
Epoch 299 - loss: 0.9144, acc: 99.04% / test_loss: 0.9212, test_acc: 98.33%
Epoch 300 - loss: 0.9129, acc: 99.19% / test_loss: 0.9219, test_acc: 98.31%
Epoch 301 - loss: 0.9133, acc: 99.15% / test_loss: 0.9237, test_acc: 98.10%
Epoch 302 - loss: 0.9135, acc: 99.15% / test_loss: 0.9259, test_acc: 97.86%
Epoch 303 - loss: 0.9148, acc: 99.01% / test_loss: 0.9216, test_acc: 98.30%
Epoch 304 - loss: 0.9136, acc: 99.13% / test_loss: 0.9231, test_acc: 98.18%
Epoch 305 - loss: 0.9130, acc: 99.19% / test_loss: 0.9217, test_acc: 98.30%
Epoch 306 - loss: 0.9140, acc: 99.07% / test_loss: 0.9241, test_acc: 98.04%
Epoch 307 - loss: 0.9125, acc: 99.24% / test_loss: 0.9208, test_acc: 98.41%
Epoch 308 - loss: 0.9122, acc: 99.26% / test_loss: 0.9216, test_acc: 98.32%
Epoch 309 - loss: 0.9119, acc: 99.31% / test_loss: 0.9204, test_acc: 98.45%
Epoch 310 - loss: 0.9131, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 311 - loss: 0.9143, acc: 99.05% / test_loss: 0.9214, test_acc: 98.35%
Epoch 312 - loss: 0.9128, acc: 99.20% / test_loss: 0.9224, test_acc: 98.24%
Epoch 313 - loss: 0.9128, acc: 99.21% / test_loss: 0.9206, test_acc: 98.42%
Epoch 314 - loss: 0.9131, acc: 99.17% / test_loss: 0.9243, test_acc: 98.03%
Epoch 315 - loss: 0.9129, acc: 99.20% / test_loss: 0.9212, test_acc: 98.37%
Epoch 316 - loss: 0.9125, acc: 99.24% / test_loss: 0.9262, test_acc: 97.86%
Epoch 317 - loss: 0.9133, acc: 99.16% / test_loss: 0.9206, test_acc: 98.39%
Epoch 318 - loss: 0.9132, acc: 99.15% / test_loss: 0.9206, test_acc: 98.43%
Epoch 319 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.35%
Epoch 320 - loss: 0.9125, acc: 99.24% / test_loss: 0.9210, test_acc: 98.38%
Epoch 321 - loss: 0.9133, acc: 99.15% / test_loss: 0.9215, test_acc: 98.35%
Epoch 322 - loss: 0.9130, acc: 99.18% / test_loss: 0.9210, test_acc: 98.35%
Epoch 323 - loss: 0.9123, acc: 99.27% / test_loss: 0.9204, test_acc: 98.44%
Epoch 324 - loss: 0.9128, acc: 99.21% / test_loss: 0.9222, test_acc: 98.27%
Epoch 325 - loss: 0.9139, acc: 99.09% / test_loss: 0.9210, test_acc: 98.40%
Epoch 326 - loss: 0.9130, acc: 99.18% / test_loss: 0.9209, test_acc: 98.37%
Epoch 327 - loss: 0.9117, acc: 99.32% / test_loss: 0.9216, test_acc: 98.32%
Epoch 328 - loss: 0.9151, acc: 98.96% / test_loss: 0.9231, test_acc: 98.19%
Epoch 329 - loss: 0.9123, acc: 99.25% / test_loss: 0.9215, test_acc: 98.33%
Epoch 330 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.33%
Epoch 331 - loss: 0.9124, acc: 99.26% / test_loss: 0.9211, test_acc: 98.38%
Epoch 332 - loss: 0.9116, acc: 99.34% / test_loss: 0.9206, test_acc: 98.40%
Epoch 333 - loss: 0.9126, acc: 99.24% / test_loss: 0.9214, test_acc: 98.37%
Epoch 334 - loss: 0.9132, acc: 99.15% / test_loss: 0.9209, test_acc: 98.38%
Epoch 335 - loss: 0.9125, acc: 99.24% / test_loss: 0.9212, test_acc: 98.32%
Epoch 336 - loss: 0.9132, acc: 99.16% / test_loss: 0.9201, test_acc: 98.47%
Epoch 337 - loss: 0.9126, acc: 99.23% / test_loss: 0.9201, test_acc: 98.50%
Epoch 338 - loss: 0.9131, acc: 99.17% / test_loss: 0.9208, test_acc: 98.39%
Epoch 339 - loss: 0.9119, acc: 99.31% / test_loss: 0.9204, test_acc: 98.42%
Epoch 340 - loss: 0.9120, acc: 99.29% / test_loss: 0.9206, test_acc: 98.44%
Epoch 341 - loss: 0.9121, acc: 99.28% / test_loss: 0.9213, test_acc: 98.32%
Epoch 342 - loss: 0.9136, acc: 99.12% / test_loss: 0.9217, test_acc: 98.29%
Epoch 343 - loss: 0.9133, acc: 99.17% / test_loss: 0.9211, test_acc: 98.37%
Epoch 344 - loss: 0.9124, acc: 99.26% / test_loss: 0.9203, test_acc: 98.40%
Epoch 345 - loss: 0.9121, acc: 99.28% / test_loss: 0.9233, test_acc: 98.13%
Epoch 346 - loss: 0.9126, acc: 99.23% / test_loss: 0.9197, test_acc: 98.51%
Epoch 347 - loss: 0.9116, acc: 99.32% / test_loss: 0.9210, test_acc: 98.35%
Epoch 348 - loss: 0.9137, acc: 99.12% / test_loss: 0.9231, test_acc: 98.16%
Epoch 349 - loss: 0.9129, acc: 99.18% / test_loss: 0.9200, test_acc: 98.47%
Epoch 350 - loss: 0.9122, acc: 99.24% / test_loss: 0.9210, test_acc: 98.37%
Epoch 351 - loss: 0.9124, acc: 99.23% / test_loss: 0.9201, test_acc: 98.46%
Epoch 352 - loss: 0.9119, acc: 99.31% / test_loss: 0.9228, test_acc: 98.23%
Epoch 353 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.19%
Epoch 354 - loss: 0.9126, acc: 99.24% / test_loss: 0.9232, test_acc: 98.14%
Epoch 355 - loss: 0.9128, acc: 99.21% / test_loss: 0.9229, test_acc: 98.21%
Epoch 356 - loss: 0.9127, acc: 99.22% / test_loss: 0.9206, test_acc: 98.43%
Epoch 357 - loss: 0.9125, acc: 99.24% / test_loss: 0.9200, test_acc: 98.50%
Epoch 358 - loss: 0.9141, acc: 99.06% / test_loss: 0.9215, test_acc: 98.32%
Epoch 359 - loss: 0.9119, acc: 99.30% / test_loss: 0.9214, test_acc: 98.35%
Epoch 360 - loss: 0.9127, acc: 99.22% / test_loss: 0.9228, test_acc: 98.23%
Epoch 361 - loss: 0.9124, acc: 99.25% / test_loss: 0.9207, test_acc: 98.39%
Epoch 362 - loss: 0.9117, acc: 99.31% / test_loss: 0.9203, test_acc: 98.42%
Epoch 363 - loss: 0.9122, acc: 99.27% / test_loss: 0.9199, test_acc: 98.47%
Epoch 364 - loss: 0.9135, acc: 99.15% / test_loss: 0.9225, test_acc: 98.22%
Epoch 365 - loss: 0.9131, acc: 99.17% / test_loss: 0.9205, test_acc: 98.41%
Epoch 366 - loss: 0.9142, acc: 99.06% / test_loss: 0.9208, test_acc: 98.41%
Epoch 367 - loss: 0.9120, acc: 99.28% / test_loss: 0.9230, test_acc: 98.17%
Epoch 368 - loss: 0.9125, acc: 99.24% / test_loss: 0.9208, test_acc: 98.39%
Epoch 369 - loss: 0.9133, acc: 99.14% / test_loss: 0.9216, test_acc: 98.30%
Epoch 370 - loss: 0.9125, acc: 99.23% / test_loss: 0.9209, test_acc: 98.38%
Epoch 371 - loss: 0.9121, acc: 99.28% / test_loss: 0.9210, test_acc: 98.36%
Epoch 372 - loss: 0.9141, acc: 99.06% / test_loss: 0.9208, test_acc: 98.42%
Epoch 373 - loss: 0.9124, acc: 99.25% / test_loss: 0.9203, test_acc: 98.44%
Epoch 374 - loss: 0.9117, acc: 99.31% / test_loss: 0.9196, test_acc: 98.51%
Epoch 375 - loss: 0.9116, acc: 99.32% / test_loss: 0.9225, test_acc: 98.23%
Epoch 376 - loss: 0.9116, acc: 99.32% / test_loss: 0.9205, test_acc: 98.46%
Epoch 377 - loss: 0.9126, acc: 99.23% / test_loss: 0.9210, test_acc: 98.36%
Epoch 378 - loss: 0.9140, acc: 99.07% / test_loss: 0.9225, test_acc: 98.23%
Epoch 379 - loss: 0.9136, acc: 99.12% / test_loss: 0.9213, test_acc: 98.35%
Epoch 380 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.32%
Epoch 381 - loss: 0.9128, acc: 99.21% / test_loss: 0.9220, test_acc: 98.29%
Epoch 382 - loss: 0.9128, acc: 99.21% / test_loss: 0.9209, test_acc: 98.38%
Epoch 383 - loss: 0.9121, acc: 99.28% / test_loss: 0.9218, test_acc: 98.30%
Epoch 384 - loss: 0.9120, acc: 99.28% / test_loss: 0.9216, test_acc: 98.33%
Epoch 385 - loss: 0.9121, acc: 99.27% / test_loss: 0.9204, test_acc: 98.44%
Epoch 386 - loss: 0.9119, acc: 99.31% / test_loss: 0.9227, test_acc: 98.20%
Epoch 387 - loss: 0.9129, acc: 99.18% / test_loss: 0.9220, test_acc: 98.26%
Epoch 388 - loss: 0.9140, acc: 99.07% / test_loss: 0.9214, test_acc: 98.35%
Epoch 389 - loss: 0.9122, acc: 99.27% / test_loss: 0.9198, test_acc: 98.49%
Epoch 390 - loss: 0.9115, acc: 99.33% / test_loss: 0.9201, test_acc: 98.45%
Epoch 391 - loss: 0.9119, acc: 99.29% / test_loss: 0.9205, test_acc: 98.41%
Epoch 392 - loss: 0.9116, acc: 99.34% / test_loss: 0.9204, test_acc: 98.45%
Epoch 393 - loss: 0.9125, acc: 99.24% / test_loss: 0.9251, test_acc: 97.98%
Epoch 394 - loss: 0.9152, acc: 98.97% / test_loss: 0.9218, test_acc: 98.30%
Epoch 395 - loss: 0.9149, acc: 98.96% / test_loss: 0.9230, test_acc: 98.16%
Epoch 396 - loss: 0.9127, acc: 99.21% / test_loss: 0.9226, test_acc: 98.24%
Epoch 397 - loss: 0.9120, acc: 99.28% / test_loss: 0.9210, test_acc: 98.38%
Epoch 398 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.32%
Epoch 399 - loss: 0.9120, acc: 99.28% / test_loss: 0.9206, test_acc: 98.40%
Epoch 400 - loss: 0.9132, acc: 99.15% / test_loss: 0.9220, test_acc: 98.28%
Best test accuracy 98.51% in epoch 346.
----------------------------------------------------------------------------------------------------
Run 8
Epoch 1 - loss: 1.3461, acc: 55.67% / test_loss: 1.1790, test_acc: 72.85%
Epoch 2 - loss: 1.1469, acc: 76.53% / test_loss: 1.0799, test_acc: 83.56%
Epoch 3 - loss: 1.0654, acc: 84.58% / test_loss: 1.0582, test_acc: 85.41%
Epoch 4 - loss: 1.0503, acc: 85.70% / test_loss: 1.0367, test_acc: 86.92%
Epoch 5 - loss: 1.0421, acc: 86.48% / test_loss: 1.0332, test_acc: 87.21%
Epoch 6 - loss: 1.0365, acc: 86.89% / test_loss: 1.0269, test_acc: 88.00%
Epoch 7 - loss: 1.0318, acc: 87.41% / test_loss: 1.0298, test_acc: 87.74%
Epoch 8 - loss: 1.0270, acc: 87.83% / test_loss: 1.0185, test_acc: 88.76%
Epoch 9 - loss: 1.0264, acc: 87.87% / test_loss: 1.0180, test_acc: 88.76%
Epoch 10 - loss: 1.0227, acc: 88.21% / test_loss: 1.0125, test_acc: 89.17%
Epoch 11 - loss: 1.0195, acc: 88.46% / test_loss: 1.0148, test_acc: 88.92%
Epoch 12 - loss: 1.0186, acc: 88.55% / test_loss: 1.0163, test_acc: 88.79%
Epoch 13 - loss: 1.0171, acc: 88.69% / test_loss: 1.0154, test_acc: 88.89%
Epoch 14 - loss: 1.0155, acc: 88.89% / test_loss: 1.0098, test_acc: 89.39%
Epoch 15 - loss: 1.0155, acc: 88.80% / test_loss: 1.0095, test_acc: 89.41%
Epoch 16 - loss: 1.0132, acc: 89.00% / test_loss: 1.0077, test_acc: 89.55%
Epoch 17 - loss: 1.0132, acc: 89.07% / test_loss: 1.0065, test_acc: 89.70%
Epoch 18 - loss: 1.0131, acc: 88.99% / test_loss: 1.0102, test_acc: 89.32%
Epoch 19 - loss: 1.0130, acc: 89.08% / test_loss: 1.0060, test_acc: 89.65%
Epoch 20 - loss: 1.0115, acc: 89.20% / test_loss: 1.0057, test_acc: 89.80%
Epoch 21 - loss: 1.0097, acc: 89.36% / test_loss: 1.0084, test_acc: 89.72%
Epoch 22 - loss: 1.0079, acc: 89.57% / test_loss: 1.0042, test_acc: 89.97%
Epoch 23 - loss: 1.0084, acc: 89.50% / test_loss: 1.0037, test_acc: 90.09%
Epoch 24 - loss: 1.0066, acc: 89.66% / test_loss: 1.0054, test_acc: 89.75%
Epoch 25 - loss: 1.0057, acc: 89.74% / test_loss: 1.0123, test_acc: 89.41%
Epoch 26 - loss: 1.0053, acc: 89.82% / test_loss: 1.0015, test_acc: 90.17%
Epoch 27 - loss: 1.0046, acc: 89.85% / test_loss: 1.0025, test_acc: 90.12%
Epoch 28 - loss: 1.0030, acc: 89.97% / test_loss: 1.0001, test_acc: 90.22%
Epoch 29 - loss: 1.0049, acc: 89.85% / test_loss: 1.0014, test_acc: 90.15%
Epoch 30 - loss: 1.0028, acc: 90.03% / test_loss: 0.9991, test_acc: 90.43%
Epoch 31 - loss: 1.0030, acc: 90.01% / test_loss: 0.9987, test_acc: 90.37%
Epoch 32 - loss: 1.0025, acc: 90.06% / test_loss: 0.9985, test_acc: 90.41%
Epoch 33 - loss: 1.0014, acc: 90.18% / test_loss: 0.9982, test_acc: 90.43%
Epoch 34 - loss: 1.0035, acc: 89.92% / test_loss: 0.9987, test_acc: 90.37%
Epoch 35 - loss: 1.0043, acc: 89.88% / test_loss: 0.9982, test_acc: 90.38%
Epoch 36 - loss: 1.0018, acc: 90.09% / test_loss: 0.9980, test_acc: 90.44%
Epoch 37 - loss: 1.0011, acc: 90.11% / test_loss: 1.0001, test_acc: 90.39%
Epoch 38 - loss: 1.0009, acc: 90.18% / test_loss: 0.9993, test_acc: 90.32%
Epoch 39 - loss: 1.0001, acc: 90.25% / test_loss: 0.9986, test_acc: 90.35%
Epoch 40 - loss: 0.9990, acc: 90.33% / test_loss: 0.9984, test_acc: 90.39%
Epoch 41 - loss: 0.9996, acc: 90.32% / test_loss: 0.9996, test_acc: 90.27%
Epoch 42 - loss: 0.9991, acc: 90.34% / test_loss: 0.9968, test_acc: 90.56%
Epoch 43 - loss: 0.9992, acc: 90.33% / test_loss: 0.9975, test_acc: 90.54%
Epoch 44 - loss: 0.9954, acc: 90.71% / test_loss: 0.9927, test_acc: 90.91%
Epoch 45 - loss: 0.9949, acc: 90.72% / test_loss: 0.9925, test_acc: 90.96%
Epoch 46 - loss: 0.9916, acc: 91.09% / test_loss: 0.9974, test_acc: 90.74%
Epoch 47 - loss: 0.9894, acc: 91.29% / test_loss: 0.9904, test_acc: 91.30%
Epoch 48 - loss: 0.9879, acc: 91.46% / test_loss: 0.9870, test_acc: 91.52%
Epoch 49 - loss: 0.9851, acc: 91.75% / test_loss: 0.9847, test_acc: 91.73%
Epoch 50 - loss: 0.9827, acc: 91.95% / test_loss: 0.9833, test_acc: 91.85%
Epoch 51 - loss: 0.9824, acc: 91.98% / test_loss: 0.9871, test_acc: 91.57%
Epoch 52 - loss: 0.9813, acc: 92.07% / test_loss: 0.9818, test_acc: 91.97%
Epoch 53 - loss: 0.9808, acc: 92.13% / test_loss: 0.9826, test_acc: 91.99%
Epoch 54 - loss: 0.9798, acc: 92.25% / test_loss: 0.9831, test_acc: 91.97%
Epoch 55 - loss: 0.9792, acc: 92.30% / test_loss: 0.9814, test_acc: 92.11%
Epoch 56 - loss: 0.9772, acc: 92.48% / test_loss: 0.9807, test_acc: 92.13%
Epoch 57 - loss: 0.9777, acc: 92.48% / test_loss: 0.9806, test_acc: 92.12%
Epoch 58 - loss: 0.9785, acc: 92.37% / test_loss: 0.9813, test_acc: 92.10%
Epoch 59 - loss: 0.9779, acc: 92.39% / test_loss: 0.9805, test_acc: 92.19%
Epoch 60 - loss: 0.9780, acc: 92.43% / test_loss: 0.9782, test_acc: 92.40%
Epoch 61 - loss: 0.9757, acc: 92.59% / test_loss: 0.9816, test_acc: 92.05%
Epoch 62 - loss: 0.9772, acc: 92.48% / test_loss: 0.9779, test_acc: 92.40%
Epoch 63 - loss: 0.9759, acc: 92.64% / test_loss: 0.9797, test_acc: 92.22%
Epoch 64 - loss: 0.9764, acc: 92.57% / test_loss: 0.9836, test_acc: 91.79%
Epoch 65 - loss: 0.9750, acc: 92.69% / test_loss: 0.9839, test_acc: 91.95%
Epoch 66 - loss: 0.9760, acc: 92.60% / test_loss: 0.9794, test_acc: 92.30%
Epoch 67 - loss: 0.9745, acc: 92.74% / test_loss: 0.9767, test_acc: 92.56%
Epoch 68 - loss: 0.9735, acc: 92.86% / test_loss: 0.9791, test_acc: 92.36%
Epoch 69 - loss: 0.9742, acc: 92.77% / test_loss: 0.9801, test_acc: 92.25%
Epoch 70 - loss: 0.9741, acc: 92.77% / test_loss: 0.9794, test_acc: 92.22%
Epoch 71 - loss: 0.9736, acc: 92.84% / test_loss: 0.9780, test_acc: 92.39%
Epoch 72 - loss: 0.9748, acc: 92.70% / test_loss: 0.9808, test_acc: 92.03%
Epoch 73 - loss: 0.9739, acc: 92.75% / test_loss: 0.9775, test_acc: 92.39%
Epoch 74 - loss: 0.9734, acc: 92.87% / test_loss: 0.9777, test_acc: 92.43%
Epoch 75 - loss: 0.9734, acc: 92.84% / test_loss: 0.9768, test_acc: 92.43%
Epoch 76 - loss: 0.9729, acc: 92.90% / test_loss: 0.9760, test_acc: 92.57%
Epoch 77 - loss: 0.9738, acc: 92.77% / test_loss: 0.9759, test_acc: 92.51%
Epoch 78 - loss: 0.9723, acc: 92.96% / test_loss: 0.9765, test_acc: 92.59%
Epoch 79 - loss: 0.9730, acc: 92.87% / test_loss: 0.9777, test_acc: 92.39%
Epoch 80 - loss: 0.9751, acc: 92.71% / test_loss: 0.9790, test_acc: 92.30%
Epoch 81 - loss: 0.9734, acc: 92.83% / test_loss: 0.9765, test_acc: 92.53%
Epoch 82 - loss: 0.9715, acc: 92.99% / test_loss: 0.9767, test_acc: 92.53%
Epoch 83 - loss: 0.9722, acc: 92.97% / test_loss: 0.9754, test_acc: 92.61%
Epoch 84 - loss: 0.9717, acc: 92.99% / test_loss: 0.9757, test_acc: 92.57%
Epoch 85 - loss: 0.9708, acc: 93.09% / test_loss: 0.9760, test_acc: 92.57%
Epoch 86 - loss: 0.9706, acc: 93.07% / test_loss: 0.9764, test_acc: 92.48%
Epoch 87 - loss: 0.9718, acc: 93.02% / test_loss: 0.9753, test_acc: 92.67%
Epoch 88 - loss: 0.9723, acc: 92.94% / test_loss: 0.9779, test_acc: 92.38%
Epoch 89 - loss: 0.9716, acc: 93.02% / test_loss: 0.9765, test_acc: 92.48%
Epoch 90 - loss: 0.9710, acc: 93.05% / test_loss: 0.9759, test_acc: 92.56%
Epoch 91 - loss: 0.9702, acc: 93.15% / test_loss: 0.9749, test_acc: 92.65%
Epoch 92 - loss: 0.9714, acc: 93.05% / test_loss: 0.9741, test_acc: 92.67%
Epoch 93 - loss: 0.9698, acc: 93.19% / test_loss: 0.9744, test_acc: 92.73%
Epoch 94 - loss: 0.9706, acc: 93.10% / test_loss: 0.9765, test_acc: 92.51%
Epoch 95 - loss: 0.9694, acc: 93.26% / test_loss: 0.9731, test_acc: 92.92%
Epoch 96 - loss: 0.9688, acc: 93.26% / test_loss: 0.9746, test_acc: 92.70%
Epoch 97 - loss: 0.9699, acc: 93.12% / test_loss: 0.9735, test_acc: 92.86%
Epoch 98 - loss: 0.9693, acc: 93.23% / test_loss: 0.9759, test_acc: 92.62%
Epoch 99 - loss: 0.9709, acc: 93.07% / test_loss: 0.9770, test_acc: 92.53%
Epoch 100 - loss: 0.9693, acc: 93.24% / test_loss: 0.9734, test_acc: 92.80%
Epoch 101 - loss: 0.9693, acc: 93.26% / test_loss: 0.9731, test_acc: 92.84%
Epoch 102 - loss: 0.9700, acc: 93.14% / test_loss: 0.9766, test_acc: 92.47%
Epoch 103 - loss: 0.9682, acc: 93.30% / test_loss: 0.9722, test_acc: 92.92%
Epoch 104 - loss: 0.9678, acc: 93.36% / test_loss: 0.9723, test_acc: 92.96%
Epoch 105 - loss: 0.9679, acc: 93.36% / test_loss: 0.9723, test_acc: 92.93%
Epoch 106 - loss: 0.9678, acc: 93.40% / test_loss: 0.9741, test_acc: 92.77%
Epoch 107 - loss: 0.9686, acc: 93.30% / test_loss: 0.9748, test_acc: 92.65%
Epoch 108 - loss: 0.9690, acc: 93.24% / test_loss: 0.9722, test_acc: 92.92%
Epoch 109 - loss: 0.9683, acc: 93.32% / test_loss: 0.9732, test_acc: 92.83%
Epoch 110 - loss: 0.9687, acc: 93.27% / test_loss: 0.9737, test_acc: 92.71%
Epoch 111 - loss: 0.9678, acc: 93.36% / test_loss: 0.9712, test_acc: 92.97%
Epoch 112 - loss: 0.9674, acc: 93.37% / test_loss: 0.9710, test_acc: 92.99%
Epoch 113 - loss: 0.9675, acc: 93.36% / test_loss: 0.9731, test_acc: 92.85%
Epoch 114 - loss: 0.9682, acc: 93.35% / test_loss: 0.9709, test_acc: 93.05%
Epoch 115 - loss: 0.9677, acc: 93.36% / test_loss: 0.9710, test_acc: 93.03%
Epoch 116 - loss: 0.9689, acc: 93.27% / test_loss: 0.9731, test_acc: 92.85%
Epoch 117 - loss: 0.9698, acc: 93.18% / test_loss: 0.9713, test_acc: 92.98%
Epoch 118 - loss: 0.9669, acc: 93.44% / test_loss: 0.9707, test_acc: 93.08%
Epoch 119 - loss: 0.9672, acc: 93.40% / test_loss: 0.9728, test_acc: 92.86%
Epoch 120 - loss: 0.9676, acc: 93.37% / test_loss: 0.9797, test_acc: 92.22%
Epoch 121 - loss: 0.9676, acc: 93.38% / test_loss: 0.9712, test_acc: 93.02%
Epoch 122 - loss: 0.9672, acc: 93.39% / test_loss: 0.9716, test_acc: 92.98%
Epoch 123 - loss: 0.9671, acc: 93.44% / test_loss: 0.9696, test_acc: 93.18%
Epoch 124 - loss: 0.9658, acc: 93.55% / test_loss: 0.9699, test_acc: 93.11%
Epoch 125 - loss: 0.9673, acc: 93.40% / test_loss: 0.9754, test_acc: 92.55%
Epoch 126 - loss: 0.9667, acc: 93.44% / test_loss: 0.9701, test_acc: 93.12%
Epoch 127 - loss: 0.9701, acc: 93.14% / test_loss: 0.9718, test_acc: 92.94%
Epoch 128 - loss: 0.9675, acc: 93.39% / test_loss: 0.9715, test_acc: 92.96%
Epoch 129 - loss: 0.9660, acc: 93.52% / test_loss: 0.9708, test_acc: 93.02%
Epoch 130 - loss: 0.9658, acc: 93.54% / test_loss: 0.9700, test_acc: 93.14%
Epoch 131 - loss: 0.9689, acc: 93.24% / test_loss: 0.9750, test_acc: 92.63%
Epoch 132 - loss: 0.9671, acc: 93.42% / test_loss: 0.9713, test_acc: 93.00%
Epoch 133 - loss: 0.9679, acc: 93.36% / test_loss: 0.9706, test_acc: 93.05%
Epoch 134 - loss: 0.9656, acc: 93.54% / test_loss: 0.9719, test_acc: 92.92%
Epoch 135 - loss: 0.9682, acc: 93.29% / test_loss: 0.9724, test_acc: 92.87%
Epoch 136 - loss: 0.9672, acc: 93.41% / test_loss: 0.9703, test_acc: 93.11%
Epoch 137 - loss: 0.9671, acc: 93.42% / test_loss: 0.9709, test_acc: 92.99%
Epoch 138 - loss: 0.9667, acc: 93.44% / test_loss: 0.9714, test_acc: 92.98%
Epoch 139 - loss: 0.9669, acc: 93.43% / test_loss: 0.9706, test_acc: 93.02%
Epoch 140 - loss: 0.9662, acc: 93.49% / test_loss: 0.9705, test_acc: 93.09%
Epoch 141 - loss: 0.9666, acc: 93.47% / test_loss: 0.9720, test_acc: 92.93%
Epoch 142 - loss: 0.9671, acc: 93.39% / test_loss: 0.9712, test_acc: 93.02%
Epoch 143 - loss: 0.9658, acc: 93.53% / test_loss: 0.9697, test_acc: 93.15%
Epoch 144 - loss: 0.9667, acc: 93.45% / test_loss: 0.9714, test_acc: 93.01%
Epoch 145 - loss: 0.9691, acc: 93.23% / test_loss: 0.9741, test_acc: 92.74%
Epoch 146 - loss: 0.9667, acc: 93.44% / test_loss: 0.9705, test_acc: 93.07%
Epoch 147 - loss: 0.9659, acc: 93.51% / test_loss: 0.9708, test_acc: 93.08%
Epoch 148 - loss: 0.9663, acc: 93.49% / test_loss: 0.9718, test_acc: 92.93%
Epoch 149 - loss: 0.9668, acc: 93.42% / test_loss: 0.9709, test_acc: 93.01%
Epoch 150 - loss: 0.9663, acc: 93.48% / test_loss: 0.9704, test_acc: 93.08%
Epoch 151 - loss: 0.9663, acc: 93.51% / test_loss: 0.9726, test_acc: 92.85%
Epoch 152 - loss: 0.9676, acc: 93.37% / test_loss: 0.9710, test_acc: 92.99%
Epoch 153 - loss: 0.9671, acc: 93.42% / test_loss: 0.9701, test_acc: 93.11%
Epoch 154 - loss: 0.9647, acc: 93.62% / test_loss: 0.9704, test_acc: 93.05%
Epoch 155 - loss: 0.9654, acc: 93.56% / test_loss: 0.9746, test_acc: 92.66%
Epoch 156 - loss: 0.9656, acc: 93.56% / test_loss: 0.9701, test_acc: 93.11%
Epoch 157 - loss: 0.9650, acc: 93.64% / test_loss: 0.9717, test_acc: 92.90%
Epoch 158 - loss: 0.9655, acc: 93.57% / test_loss: 0.9725, test_acc: 92.83%
Epoch 159 - loss: 0.9667, acc: 93.45% / test_loss: 0.9701, test_acc: 93.07%
Epoch 160 - loss: 0.9659, acc: 93.54% / test_loss: 0.9717, test_acc: 92.99%
Epoch 161 - loss: 0.9663, acc: 93.50% / test_loss: 0.9704, test_acc: 93.11%
Epoch 162 - loss: 0.9649, acc: 93.61% / test_loss: 0.9691, test_acc: 93.20%
Epoch 163 - loss: 0.9650, acc: 93.61% / test_loss: 0.9708, test_acc: 93.04%
Epoch 164 - loss: 0.9641, acc: 93.72% / test_loss: 0.9695, test_acc: 93.16%
Epoch 165 - loss: 0.9650, acc: 93.59% / test_loss: 0.9700, test_acc: 93.11%
Epoch 166 - loss: 0.9652, acc: 93.60% / test_loss: 0.9708, test_acc: 93.01%
Epoch 167 - loss: 0.9653, acc: 93.60% / test_loss: 0.9699, test_acc: 93.09%
Epoch 168 - loss: 0.9646, acc: 93.63% / test_loss: 0.9692, test_acc: 93.20%
Epoch 169 - loss: 0.9643, acc: 93.67% / test_loss: 0.9693, test_acc: 93.24%
Epoch 170 - loss: 0.9649, acc: 93.61% / test_loss: 0.9699, test_acc: 93.16%
Epoch 171 - loss: 0.9644, acc: 93.67% / test_loss: 0.9688, test_acc: 93.22%
Epoch 172 - loss: 0.9637, acc: 93.73% / test_loss: 0.9692, test_acc: 93.17%
Epoch 173 - loss: 0.9650, acc: 93.58% / test_loss: 0.9702, test_acc: 93.08%
Epoch 174 - loss: 0.9649, acc: 93.62% / test_loss: 0.9694, test_acc: 93.14%
Epoch 175 - loss: 0.9652, acc: 93.59% / test_loss: 0.9697, test_acc: 93.13%
Epoch 176 - loss: 0.9650, acc: 93.61% / test_loss: 0.9691, test_acc: 93.19%
Epoch 177 - loss: 0.9648, acc: 93.63% / test_loss: 0.9698, test_acc: 93.13%
Epoch 178 - loss: 0.9639, acc: 93.72% / test_loss: 0.9696, test_acc: 93.15%
Epoch 179 - loss: 0.9641, acc: 93.67% / test_loss: 0.9688, test_acc: 93.19%
Epoch 180 - loss: 0.9625, acc: 93.84% / test_loss: 0.9674, test_acc: 93.42%
Epoch 181 - loss: 0.9642, acc: 93.69% / test_loss: 0.9674, test_acc: 93.41%
Epoch 182 - loss: 0.9515, acc: 95.06% / test_loss: 0.9261, test_acc: 97.90%
Epoch 183 - loss: 0.9183, acc: 98.66% / test_loss: 0.9246, test_acc: 98.09%
Epoch 184 - loss: 0.9181, acc: 98.72% / test_loss: 0.9243, test_acc: 98.07%
Epoch 185 - loss: 0.9169, acc: 98.78% / test_loss: 0.9238, test_acc: 98.13%
Epoch 186 - loss: 0.9178, acc: 98.71% / test_loss: 0.9242, test_acc: 98.08%
Epoch 187 - loss: 0.9168, acc: 98.80% / test_loss: 0.9261, test_acc: 97.91%
Epoch 188 - loss: 0.9167, acc: 98.81% / test_loss: 0.9246, test_acc: 97.98%
Epoch 189 - loss: 0.9168, acc: 98.81% / test_loss: 0.9241, test_acc: 98.12%
Epoch 190 - loss: 0.9159, acc: 98.92% / test_loss: 0.9234, test_acc: 98.14%
Epoch 191 - loss: 0.9157, acc: 98.93% / test_loss: 0.9237, test_acc: 98.11%
Epoch 192 - loss: 0.9166, acc: 98.82% / test_loss: 0.9230, test_acc: 98.19%
Epoch 193 - loss: 0.9160, acc: 98.90% / test_loss: 0.9254, test_acc: 97.96%
Epoch 194 - loss: 0.9162, acc: 98.86% / test_loss: 0.9260, test_acc: 97.88%
Epoch 195 - loss: 0.9182, acc: 98.69% / test_loss: 0.9241, test_acc: 98.10%
Epoch 196 - loss: 0.9171, acc: 98.81% / test_loss: 0.9238, test_acc: 98.11%
Epoch 197 - loss: 0.9154, acc: 98.96% / test_loss: 0.9239, test_acc: 98.06%
Epoch 198 - loss: 0.9158, acc: 98.90% / test_loss: 0.9238, test_acc: 98.10%
Epoch 199 - loss: 0.9149, acc: 98.99% / test_loss: 0.9241, test_acc: 98.07%
Epoch 200 - loss: 0.9153, acc: 98.95% / test_loss: 0.9243, test_acc: 98.07%
Epoch 201 - loss: 0.9150, acc: 98.97% / test_loss: 0.9238, test_acc: 98.08%
Epoch 202 - loss: 0.9169, acc: 98.79% / test_loss: 0.9233, test_acc: 98.17%
Epoch 203 - loss: 0.9148, acc: 99.01% / test_loss: 0.9253, test_acc: 97.95%
Epoch 204 - loss: 0.9154, acc: 98.97% / test_loss: 0.9276, test_acc: 97.75%
Epoch 205 - loss: 0.9160, acc: 98.92% / test_loss: 0.9238, test_acc: 98.09%
Epoch 206 - loss: 0.9153, acc: 98.97% / test_loss: 0.9236, test_acc: 98.12%
Epoch 207 - loss: 0.9151, acc: 98.98% / test_loss: 0.9223, test_acc: 98.27%
Epoch 208 - loss: 0.9143, acc: 99.07% / test_loss: 0.9226, test_acc: 98.23%
Epoch 209 - loss: 0.9144, acc: 99.04% / test_loss: 0.9227, test_acc: 98.19%
Epoch 210 - loss: 0.9141, acc: 99.07% / test_loss: 0.9227, test_acc: 98.21%
Epoch 211 - loss: 0.9153, acc: 98.97% / test_loss: 0.9256, test_acc: 97.94%
Epoch 212 - loss: 0.9169, acc: 98.80% / test_loss: 0.9235, test_acc: 98.14%
Epoch 213 - loss: 0.9150, acc: 99.00% / test_loss: 0.9233, test_acc: 98.15%
Epoch 214 - loss: 0.9147, acc: 99.02% / test_loss: 0.9218, test_acc: 98.31%
Epoch 215 - loss: 0.9157, acc: 98.94% / test_loss: 0.9242, test_acc: 98.07%
Epoch 216 - loss: 0.9160, acc: 98.89% / test_loss: 0.9249, test_acc: 98.01%
Epoch 217 - loss: 0.9164, acc: 98.85% / test_loss: 0.9245, test_acc: 98.02%
Epoch 218 - loss: 0.9156, acc: 98.96% / test_loss: 0.9240, test_acc: 98.06%
Epoch 219 - loss: 0.9149, acc: 98.99% / test_loss: 0.9224, test_acc: 98.23%
Epoch 220 - loss: 0.9144, acc: 99.06% / test_loss: 0.9265, test_acc: 97.83%
Epoch 221 - loss: 0.9143, acc: 99.05% / test_loss: 0.9248, test_acc: 97.98%
Epoch 222 - loss: 0.9144, acc: 99.05% / test_loss: 0.9222, test_acc: 98.23%
Epoch 223 - loss: 0.9154, acc: 98.99% / test_loss: 0.9327, test_acc: 97.21%
Epoch 224 - loss: 0.9159, acc: 98.89% / test_loss: 0.9247, test_acc: 98.01%
Epoch 225 - loss: 0.9143, acc: 99.05% / test_loss: 0.9233, test_acc: 98.14%
Epoch 226 - loss: 0.9135, acc: 99.14% / test_loss: 0.9223, test_acc: 98.23%
Epoch 227 - loss: 0.9144, acc: 99.03% / test_loss: 0.9243, test_acc: 98.04%
Epoch 228 - loss: 0.9143, acc: 99.06% / test_loss: 0.9217, test_acc: 98.32%
Epoch 229 - loss: 0.9140, acc: 99.10% / test_loss: 0.9232, test_acc: 98.17%
Epoch 230 - loss: 0.9152, acc: 98.94% / test_loss: 0.9243, test_acc: 98.10%
Epoch 231 - loss: 0.9142, acc: 99.06% / test_loss: 0.9242, test_acc: 98.01%
Epoch 232 - loss: 0.9152, acc: 98.97% / test_loss: 0.9224, test_acc: 98.29%
Epoch 233 - loss: 0.9142, acc: 99.05% / test_loss: 0.9236, test_acc: 98.13%
Epoch 234 - loss: 0.9138, acc: 99.11% / test_loss: 0.9238, test_acc: 98.09%
Epoch 235 - loss: 0.9155, acc: 98.94% / test_loss: 0.9232, test_acc: 98.12%
Epoch 236 - loss: 0.9149, acc: 98.98% / test_loss: 0.9254, test_acc: 97.91%
Epoch 237 - loss: 0.9144, acc: 99.06% / test_loss: 0.9238, test_acc: 98.12%
Epoch 238 - loss: 0.9133, acc: 99.16% / test_loss: 0.9231, test_acc: 98.18%
Epoch 239 - loss: 0.9139, acc: 99.10% / test_loss: 0.9230, test_acc: 98.17%
Epoch 240 - loss: 0.9136, acc: 99.15% / test_loss: 0.9235, test_acc: 98.10%
Epoch 241 - loss: 0.9135, acc: 99.15% / test_loss: 0.9223, test_acc: 98.24%
Epoch 242 - loss: 0.9137, acc: 99.10% / test_loss: 0.9238, test_acc: 98.11%
Epoch 243 - loss: 0.9137, acc: 99.12% / test_loss: 0.9216, test_acc: 98.33%
Epoch 244 - loss: 0.9130, acc: 99.19% / test_loss: 0.9214, test_acc: 98.35%
Epoch 245 - loss: 0.9129, acc: 99.20% / test_loss: 0.9229, test_acc: 98.20%
Epoch 246 - loss: 0.9152, acc: 98.97% / test_loss: 0.9219, test_acc: 98.30%
Epoch 247 - loss: 0.9144, acc: 99.03% / test_loss: 0.9228, test_acc: 98.18%
Epoch 248 - loss: 0.9137, acc: 99.11% / test_loss: 0.9238, test_acc: 98.13%
Epoch 249 - loss: 0.9138, acc: 99.12% / test_loss: 0.9232, test_acc: 98.14%
Epoch 250 - loss: 0.9134, acc: 99.15% / test_loss: 0.9249, test_acc: 97.99%
Epoch 251 - loss: 0.9143, acc: 99.06% / test_loss: 0.9223, test_acc: 98.24%
Epoch 252 - loss: 0.9142, acc: 99.06% / test_loss: 0.9221, test_acc: 98.28%
Epoch 253 - loss: 0.9150, acc: 99.00% / test_loss: 0.9235, test_acc: 98.12%
Epoch 254 - loss: 0.9144, acc: 99.02% / test_loss: 0.9223, test_acc: 98.26%
Epoch 255 - loss: 0.9133, acc: 99.15% / test_loss: 0.9216, test_acc: 98.33%
Epoch 256 - loss: 0.9140, acc: 99.08% / test_loss: 0.9242, test_acc: 98.08%
Epoch 257 - loss: 0.9140, acc: 99.09% / test_loss: 0.9236, test_acc: 98.13%
Epoch 258 - loss: 0.9139, acc: 99.09% / test_loss: 0.9229, test_acc: 98.20%
Epoch 259 - loss: 0.9139, acc: 99.09% / test_loss: 0.9216, test_acc: 98.34%
Epoch 260 - loss: 0.9130, acc: 99.20% / test_loss: 0.9229, test_acc: 98.19%
Epoch 261 - loss: 0.9130, acc: 99.18% / test_loss: 0.9224, test_acc: 98.24%
Epoch 262 - loss: 0.9134, acc: 99.14% / test_loss: 0.9216, test_acc: 98.33%
Epoch 263 - loss: 0.9132, acc: 99.17% / test_loss: 0.9218, test_acc: 98.28%
Epoch 264 - loss: 0.9153, acc: 98.97% / test_loss: 0.9232, test_acc: 98.14%
Epoch 265 - loss: 0.9142, acc: 99.06% / test_loss: 0.9226, test_acc: 98.18%
Epoch 266 - loss: 0.9131, acc: 99.20% / test_loss: 0.9221, test_acc: 98.24%
Epoch 267 - loss: 0.9132, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 268 - loss: 0.9136, acc: 99.12% / test_loss: 0.9216, test_acc: 98.32%
Epoch 269 - loss: 0.9142, acc: 99.04% / test_loss: 0.9237, test_acc: 98.11%
Epoch 270 - loss: 0.9144, acc: 99.04% / test_loss: 0.9260, test_acc: 97.85%
Epoch 271 - loss: 0.9134, acc: 99.15% / test_loss: 0.9232, test_acc: 98.15%
Epoch 272 - loss: 0.9134, acc: 99.13% / test_loss: 0.9225, test_acc: 98.19%
Epoch 273 - loss: 0.9137, acc: 99.09% / test_loss: 0.9283, test_acc: 97.61%
Epoch 274 - loss: 0.9141, acc: 99.07% / test_loss: 0.9223, test_acc: 98.26%
Epoch 275 - loss: 0.9135, acc: 99.12% / test_loss: 0.9228, test_acc: 98.17%
Epoch 276 - loss: 0.9134, acc: 99.14% / test_loss: 0.9234, test_acc: 98.16%
Epoch 277 - loss: 0.9139, acc: 99.11% / test_loss: 0.9226, test_acc: 98.24%
Epoch 278 - loss: 0.9137, acc: 99.12% / test_loss: 0.9259, test_acc: 97.91%
Epoch 279 - loss: 0.9130, acc: 99.21% / test_loss: 0.9233, test_acc: 98.15%
Epoch 280 - loss: 0.9143, acc: 99.05% / test_loss: 0.9228, test_acc: 98.19%
Epoch 281 - loss: 0.9135, acc: 99.14% / test_loss: 0.9254, test_acc: 97.97%
Epoch 282 - loss: 0.9144, acc: 99.04% / test_loss: 0.9222, test_acc: 98.28%
Epoch 283 - loss: 0.9143, acc: 99.05% / test_loss: 0.9229, test_acc: 98.16%
Epoch 284 - loss: 0.9136, acc: 99.15% / test_loss: 0.9235, test_acc: 98.13%
Epoch 285 - loss: 0.9142, acc: 99.07% / test_loss: 0.9311, test_acc: 97.36%
Epoch 286 - loss: 0.9133, acc: 99.15% / test_loss: 0.9225, test_acc: 98.21%
Epoch 287 - loss: 0.9130, acc: 99.18% / test_loss: 0.9227, test_acc: 98.21%
Epoch 288 - loss: 0.9135, acc: 99.15% / test_loss: 0.9228, test_acc: 98.17%
Epoch 289 - loss: 0.9143, acc: 99.05% / test_loss: 0.9222, test_acc: 98.26%
Epoch 290 - loss: 0.9137, acc: 99.12% / test_loss: 0.9213, test_acc: 98.32%
Epoch 291 - loss: 0.9124, acc: 99.24% / test_loss: 0.9219, test_acc: 98.28%
Epoch 292 - loss: 0.9127, acc: 99.21% / test_loss: 0.9262, test_acc: 97.83%
Epoch 293 - loss: 0.9132, acc: 99.18% / test_loss: 0.9241, test_acc: 98.07%
Epoch 294 - loss: 0.9145, acc: 99.04% / test_loss: 0.9224, test_acc: 98.20%
Epoch 295 - loss: 0.9141, acc: 99.06% / test_loss: 0.9225, test_acc: 98.21%
Epoch 296 - loss: 0.9130, acc: 99.18% / test_loss: 0.9209, test_acc: 98.41%
Epoch 297 - loss: 0.9128, acc: 99.19% / test_loss: 0.9216, test_acc: 98.32%
Epoch 298 - loss: 0.9121, acc: 99.28% / test_loss: 0.9236, test_acc: 98.09%
Epoch 299 - loss: 0.9142, acc: 99.09% / test_loss: 0.9226, test_acc: 98.22%
Epoch 300 - loss: 0.9132, acc: 99.18% / test_loss: 0.9223, test_acc: 98.26%
Epoch 301 - loss: 0.9139, acc: 99.09% / test_loss: 0.9265, test_acc: 97.80%
Epoch 302 - loss: 0.9137, acc: 99.12% / test_loss: 0.9231, test_acc: 98.17%
Epoch 303 - loss: 0.9128, acc: 99.21% / test_loss: 0.9231, test_acc: 98.15%
Epoch 304 - loss: 0.9138, acc: 99.09% / test_loss: 0.9278, test_acc: 97.72%
Epoch 305 - loss: 0.9141, acc: 99.09% / test_loss: 0.9236, test_acc: 98.12%
Epoch 306 - loss: 0.9133, acc: 99.15% / test_loss: 0.9232, test_acc: 98.17%
Epoch 307 - loss: 0.9133, acc: 99.15% / test_loss: 0.9223, test_acc: 98.24%
Epoch 308 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.29%
Epoch 309 - loss: 0.9141, acc: 99.07% / test_loss: 0.9216, test_acc: 98.32%
Epoch 310 - loss: 0.9131, acc: 99.18% / test_loss: 0.9220, test_acc: 98.26%
Epoch 311 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.26%
Epoch 312 - loss: 0.9132, acc: 99.17% / test_loss: 0.9211, test_acc: 98.38%
Epoch 313 - loss: 0.9132, acc: 99.15% / test_loss: 0.9207, test_acc: 98.40%
Epoch 314 - loss: 0.9130, acc: 99.18% / test_loss: 0.9215, test_acc: 98.31%
Epoch 315 - loss: 0.9141, acc: 99.05% / test_loss: 0.9243, test_acc: 98.05%
Epoch 316 - loss: 0.9136, acc: 99.14% / test_loss: 0.9227, test_acc: 98.17%
Epoch 317 - loss: 0.9126, acc: 99.23% / test_loss: 0.9214, test_acc: 98.31%
Epoch 318 - loss: 0.9131, acc: 99.18% / test_loss: 0.9244, test_acc: 98.03%
Epoch 319 - loss: 0.9127, acc: 99.21% / test_loss: 0.9224, test_acc: 98.20%
Epoch 320 - loss: 0.9127, acc: 99.21% / test_loss: 0.9230, test_acc: 98.20%
Epoch 321 - loss: 0.9154, acc: 98.94% / test_loss: 0.9253, test_acc: 97.94%
Epoch 322 - loss: 0.9141, acc: 99.06% / test_loss: 0.9207, test_acc: 98.42%
Epoch 323 - loss: 0.9125, acc: 99.24% / test_loss: 0.9206, test_acc: 98.39%
Epoch 324 - loss: 0.9128, acc: 99.19% / test_loss: 0.9217, test_acc: 98.31%
Epoch 325 - loss: 0.9141, acc: 99.07% / test_loss: 0.9228, test_acc: 98.18%
Epoch 326 - loss: 0.9132, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 327 - loss: 0.9132, acc: 99.16% / test_loss: 0.9233, test_acc: 98.13%
Epoch 328 - loss: 0.9146, acc: 99.01% / test_loss: 0.9220, test_acc: 98.29%
Epoch 329 - loss: 0.9137, acc: 99.11% / test_loss: 0.9235, test_acc: 98.13%
Epoch 330 - loss: 0.9128, acc: 99.20% / test_loss: 0.9211, test_acc: 98.34%
Epoch 331 - loss: 0.9123, acc: 99.25% / test_loss: 0.9216, test_acc: 98.31%
Epoch 332 - loss: 0.9129, acc: 99.20% / test_loss: 0.9221, test_acc: 98.26%
Epoch 333 - loss: 0.9132, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 334 - loss: 0.9125, acc: 99.25% / test_loss: 0.9253, test_acc: 97.95%
Epoch 335 - loss: 0.9127, acc: 99.22% / test_loss: 0.9221, test_acc: 98.25%
Epoch 336 - loss: 0.9125, acc: 99.24% / test_loss: 0.9217, test_acc: 98.30%
Epoch 337 - loss: 0.9142, acc: 99.08% / test_loss: 0.9226, test_acc: 98.20%
Epoch 338 - loss: 0.9142, acc: 99.08% / test_loss: 0.9222, test_acc: 98.26%
Epoch 339 - loss: 0.9135, acc: 99.15% / test_loss: 0.9223, test_acc: 98.23%
Epoch 340 - loss: 0.9136, acc: 99.10% / test_loss: 0.9215, test_acc: 98.34%
Epoch 341 - loss: 0.9136, acc: 99.12% / test_loss: 0.9221, test_acc: 98.26%
Epoch 342 - loss: 0.9130, acc: 99.20% / test_loss: 0.9238, test_acc: 98.07%
Epoch 343 - loss: 0.9124, acc: 99.25% / test_loss: 0.9219, test_acc: 98.28%
Epoch 344 - loss: 0.9125, acc: 99.23% / test_loss: 0.9216, test_acc: 98.34%
Epoch 345 - loss: 0.9130, acc: 99.18% / test_loss: 0.9254, test_acc: 97.88%
Epoch 346 - loss: 0.9147, acc: 99.01% / test_loss: 0.9253, test_acc: 97.91%
Epoch 347 - loss: 0.9133, acc: 99.15% / test_loss: 0.9228, test_acc: 98.18%
Epoch 348 - loss: 0.9133, acc: 99.15% / test_loss: 0.9219, test_acc: 98.27%
Epoch 349 - loss: 0.9142, acc: 99.08% / test_loss: 0.9268, test_acc: 97.77%
Epoch 350 - loss: 0.9150, acc: 98.97% / test_loss: 0.9220, test_acc: 98.28%
Epoch 351 - loss: 0.9129, acc: 99.20% / test_loss: 0.9235, test_acc: 98.12%
Epoch 352 - loss: 0.9133, acc: 99.14% / test_loss: 0.9215, test_acc: 98.31%
Epoch 353 - loss: 0.9122, acc: 99.27% / test_loss: 0.9227, test_acc: 98.17%
Epoch 354 - loss: 0.9132, acc: 99.17% / test_loss: 0.9229, test_acc: 98.20%
Epoch 355 - loss: 0.9126, acc: 99.22% / test_loss: 0.9220, test_acc: 98.27%
Epoch 356 - loss: 0.9139, acc: 99.09% / test_loss: 0.9222, test_acc: 98.25%
Epoch 357 - loss: 0.9123, acc: 99.24% / test_loss: 0.9243, test_acc: 98.04%
Epoch 358 - loss: 0.9127, acc: 99.22% / test_loss: 0.9217, test_acc: 98.31%
Epoch 359 - loss: 0.9120, acc: 99.28% / test_loss: 0.9221, test_acc: 98.26%
Epoch 360 - loss: 0.9122, acc: 99.26% / test_loss: 0.9229, test_acc: 98.17%
Epoch 361 - loss: 0.9140, acc: 99.09% / test_loss: 0.9224, test_acc: 98.21%
Epoch 362 - loss: 0.9134, acc: 99.14% / test_loss: 0.9224, test_acc: 98.23%
Epoch 363 - loss: 0.9138, acc: 99.12% / test_loss: 0.9225, test_acc: 98.22%
Epoch 364 - loss: 0.9141, acc: 99.07% / test_loss: 0.9230, test_acc: 98.14%
Epoch 365 - loss: 0.9141, acc: 99.04% / test_loss: 0.9224, test_acc: 98.24%
Epoch 366 - loss: 0.9127, acc: 99.21% / test_loss: 0.9237, test_acc: 98.13%
Epoch 367 - loss: 0.9141, acc: 99.07% / test_loss: 0.9226, test_acc: 98.23%
Epoch 368 - loss: 0.9128, acc: 99.20% / test_loss: 0.9218, test_acc: 98.29%
Epoch 369 - loss: 0.9131, acc: 99.16% / test_loss: 0.9219, test_acc: 98.29%
Epoch 370 - loss: 0.9122, acc: 99.27% / test_loss: 0.9211, test_acc: 98.37%
Epoch 371 - loss: 0.9119, acc: 99.29% / test_loss: 0.9213, test_acc: 98.36%
Epoch 372 - loss: 0.9123, acc: 99.26% / test_loss: 0.9225, test_acc: 98.28%
Epoch 373 - loss: 0.9130, acc: 99.21% / test_loss: 0.9227, test_acc: 98.20%
Epoch 374 - loss: 0.9126, acc: 99.22% / test_loss: 0.9207, test_acc: 98.42%
Epoch 375 - loss: 0.9130, acc: 99.17% / test_loss: 0.9243, test_acc: 98.07%
Epoch 376 - loss: 0.9142, acc: 99.06% / test_loss: 0.9217, test_acc: 98.31%
Epoch 377 - loss: 0.9127, acc: 99.22% / test_loss: 0.9246, test_acc: 97.98%
Epoch 378 - loss: 0.9138, acc: 99.11% / test_loss: 0.9221, test_acc: 98.25%
Epoch 379 - loss: 0.9131, acc: 99.18% / test_loss: 0.9215, test_acc: 98.32%
Epoch 380 - loss: 0.9124, acc: 99.25% / test_loss: 0.9205, test_acc: 98.41%
Epoch 381 - loss: 0.9128, acc: 99.19% / test_loss: 0.9347, test_acc: 97.02%
Epoch 382 - loss: 0.9138, acc: 99.12% / test_loss: 0.9220, test_acc: 98.29%
Epoch 383 - loss: 0.9143, acc: 99.03% / test_loss: 0.9212, test_acc: 98.35%
Epoch 384 - loss: 0.9132, acc: 99.15% / test_loss: 0.9213, test_acc: 98.36%
Epoch 385 - loss: 0.9129, acc: 99.19% / test_loss: 0.9205, test_acc: 98.41%
Epoch 386 - loss: 0.9129, acc: 99.20% / test_loss: 0.9216, test_acc: 98.32%
Epoch 387 - loss: 0.9131, acc: 99.18% / test_loss: 0.9230, test_acc: 98.19%
Epoch 388 - loss: 0.9131, acc: 99.18% / test_loss: 0.9210, test_acc: 98.38%
Epoch 389 - loss: 0.9127, acc: 99.20% / test_loss: 0.9226, test_acc: 98.19%
Epoch 390 - loss: 0.9133, acc: 99.15% / test_loss: 0.9214, test_acc: 98.33%
Epoch 391 - loss: 0.9120, acc: 99.28% / test_loss: 0.9209, test_acc: 98.36%
Epoch 392 - loss: 0.9122, acc: 99.26% / test_loss: 0.9209, test_acc: 98.37%
Epoch 393 - loss: 0.9122, acc: 99.26% / test_loss: 0.9211, test_acc: 98.38%
Epoch 394 - loss: 0.9129, acc: 99.18% / test_loss: 0.9237, test_acc: 98.14%
Epoch 395 - loss: 0.9143, acc: 99.07% / test_loss: 0.9238, test_acc: 98.10%
Epoch 396 - loss: 0.9130, acc: 99.18% / test_loss: 0.9206, test_acc: 98.42%
Epoch 397 - loss: 0.9124, acc: 99.24% / test_loss: 0.9214, test_acc: 98.35%
Epoch 398 - loss: 0.9123, acc: 99.24% / test_loss: 0.9207, test_acc: 98.41%
Epoch 399 - loss: 0.9121, acc: 99.27% / test_loss: 0.9223, test_acc: 98.26%
Epoch 400 - loss: 0.9136, acc: 99.12% / test_loss: 0.9225, test_acc: 98.22%
Best test accuracy 98.42% in epoch 322.
----------------------------------------------------------------------------------------------------
Run 9
Epoch 1 - loss: 1.3374, acc: 56.95% / test_loss: 1.1728, test_acc: 74.81%
Epoch 2 - loss: 1.1214, acc: 79.15% / test_loss: 1.0713, test_acc: 83.68%
Epoch 3 - loss: 1.0571, acc: 85.15% / test_loss: 1.0351, test_acc: 87.19%
Epoch 4 - loss: 1.0428, acc: 86.43% / test_loss: 1.0331, test_acc: 87.35%
Epoch 5 - loss: 1.0358, acc: 87.02% / test_loss: 1.0225, test_acc: 88.34%
Epoch 6 - loss: 1.0304, acc: 87.56% / test_loss: 1.0307, test_acc: 87.49%
Epoch 7 - loss: 1.0272, acc: 87.82% / test_loss: 1.0166, test_acc: 88.85%
Epoch 8 - loss: 1.0262, acc: 87.88% / test_loss: 1.0167, test_acc: 88.85%
Epoch 9 - loss: 1.0239, acc: 88.10% / test_loss: 1.0226, test_acc: 88.28%
Epoch 10 - loss: 1.0209, acc: 88.40% / test_loss: 1.0144, test_acc: 89.05%
Epoch 11 - loss: 1.0196, acc: 88.52% / test_loss: 1.0123, test_acc: 89.25%
Epoch 12 - loss: 1.0183, acc: 88.64% / test_loss: 1.0125, test_acc: 89.23%
Epoch 13 - loss: 1.0162, acc: 88.84% / test_loss: 1.0084, test_acc: 89.54%
Epoch 14 - loss: 1.0140, acc: 88.99% / test_loss: 1.0063, test_acc: 89.73%
Epoch 15 - loss: 1.0117, acc: 89.24% / test_loss: 1.0060, test_acc: 89.77%
Epoch 16 - loss: 1.0129, acc: 89.08% / test_loss: 1.0076, test_acc: 89.78%
Epoch 17 - loss: 1.0115, acc: 89.33% / test_loss: 1.0081, test_acc: 89.56%
Epoch 18 - loss: 1.0100, acc: 89.41% / test_loss: 1.0045, test_acc: 89.91%
Epoch 19 - loss: 1.0092, acc: 89.51% / test_loss: 1.0040, test_acc: 89.91%
Epoch 20 - loss: 1.0076, acc: 89.58% / test_loss: 1.0028, test_acc: 90.05%
Epoch 21 - loss: 1.0097, acc: 89.37% / test_loss: 1.0126, test_acc: 89.28%
Epoch 22 - loss: 1.0103, acc: 89.34% / test_loss: 1.0037, test_acc: 89.97%
Epoch 23 - loss: 1.0070, acc: 89.66% / test_loss: 1.0040, test_acc: 89.92%
Epoch 24 - loss: 1.0073, acc: 89.58% / test_loss: 1.0016, test_acc: 90.18%
Epoch 25 - loss: 1.0057, acc: 89.76% / test_loss: 1.0014, test_acc: 90.15%
Epoch 26 - loss: 1.0068, acc: 89.59% / test_loss: 1.0055, test_acc: 89.78%
Epoch 27 - loss: 1.0059, acc: 89.73% / test_loss: 1.0067, test_acc: 89.60%
Epoch 28 - loss: 1.0058, acc: 89.73% / test_loss: 1.0053, test_acc: 89.81%
Epoch 29 - loss: 1.0065, acc: 89.62% / test_loss: 1.0021, test_acc: 90.03%
Epoch 30 - loss: 1.0051, acc: 89.78% / test_loss: 1.0011, test_acc: 90.27%
Epoch 31 - loss: 1.0055, acc: 89.69% / test_loss: 1.0044, test_acc: 89.89%
Epoch 32 - loss: 1.0052, acc: 89.79% / test_loss: 1.0006, test_acc: 90.20%
Epoch 33 - loss: 1.0047, acc: 89.83% / test_loss: 1.0014, test_acc: 90.10%
Epoch 34 - loss: 1.0036, acc: 89.91% / test_loss: 0.9998, test_acc: 90.27%
Epoch 35 - loss: 1.0034, acc: 89.94% / test_loss: 1.0009, test_acc: 90.20%
Epoch 36 - loss: 1.0043, acc: 89.79% / test_loss: 1.0001, test_acc: 90.23%
Epoch 37 - loss: 1.0048, acc: 89.80% / test_loss: 1.0021, test_acc: 90.03%
Epoch 38 - loss: 1.0046, acc: 89.83% / test_loss: 0.9985, test_acc: 90.39%
Epoch 39 - loss: 1.0024, acc: 90.00% / test_loss: 1.0006, test_acc: 90.19%
Epoch 40 - loss: 1.0023, acc: 90.04% / test_loss: 1.0008, test_acc: 90.18%
Epoch 41 - loss: 1.0031, acc: 89.97% / test_loss: 1.0083, test_acc: 89.44%
Epoch 42 - loss: 1.0027, acc: 90.00% / test_loss: 1.0013, test_acc: 90.04%
Epoch 43 - loss: 1.0016, acc: 90.09% / test_loss: 1.0008, test_acc: 90.25%
Epoch 44 - loss: 1.0006, acc: 90.17% / test_loss: 0.9983, test_acc: 90.35%
Epoch 45 - loss: 1.0003, acc: 90.19% / test_loss: 0.9969, test_acc: 90.48%
Epoch 46 - loss: 1.0007, acc: 90.15% / test_loss: 0.9988, test_acc: 90.40%
Epoch 47 - loss: 1.0001, acc: 90.22% / test_loss: 0.9987, test_acc: 90.35%
Epoch 48 - loss: 0.9987, acc: 90.32% / test_loss: 0.9970, test_acc: 90.55%
Epoch 49 - loss: 0.9984, acc: 90.35% / test_loss: 0.9973, test_acc: 90.43%
Epoch 50 - loss: 1.0009, acc: 90.15% / test_loss: 0.9961, test_acc: 90.55%
Epoch 51 - loss: 0.9991, acc: 90.30% / test_loss: 0.9989, test_acc: 90.28%
Epoch 52 - loss: 0.9996, acc: 90.27% / test_loss: 0.9962, test_acc: 90.57%
Epoch 53 - loss: 0.9978, acc: 90.40% / test_loss: 0.9982, test_acc: 90.36%
Epoch 54 - loss: 0.9987, acc: 90.39% / test_loss: 0.9975, test_acc: 90.43%
Epoch 55 - loss: 0.9986, acc: 90.33% / test_loss: 0.9976, test_acc: 90.45%
Epoch 56 - loss: 0.9989, acc: 90.31% / test_loss: 0.9974, test_acc: 90.44%
Epoch 57 - loss: 0.9994, acc: 90.25% / test_loss: 1.0010, test_acc: 90.06%
Epoch 58 - loss: 0.9993, acc: 90.26% / test_loss: 0.9951, test_acc: 90.68%
Epoch 59 - loss: 0.9981, acc: 90.37% / test_loss: 0.9957, test_acc: 90.59%
Epoch 60 - loss: 0.9979, acc: 90.39% / test_loss: 0.9961, test_acc: 90.52%
Epoch 61 - loss: 0.9973, acc: 90.43% / test_loss: 0.9980, test_acc: 90.38%
Epoch 62 - loss: 0.9984, acc: 90.32% / test_loss: 0.9966, test_acc: 90.51%
Epoch 63 - loss: 0.9969, acc: 90.47% / test_loss: 0.9980, test_acc: 90.34%
Epoch 64 - loss: 0.9976, acc: 90.42% / test_loss: 1.0015, test_acc: 90.07%
Epoch 65 - loss: 0.9971, acc: 90.48% / test_loss: 0.9942, test_acc: 90.74%
Epoch 66 - loss: 0.9955, acc: 90.61% / test_loss: 0.9965, test_acc: 90.52%
Epoch 67 - loss: 0.9966, acc: 90.47% / test_loss: 0.9956, test_acc: 90.56%
Epoch 68 - loss: 0.9949, acc: 90.68% / test_loss: 0.9977, test_acc: 90.37%
Epoch 69 - loss: 0.9959, acc: 90.59% / test_loss: 0.9942, test_acc: 90.74%
Epoch 70 - loss: 0.9948, acc: 90.68% / test_loss: 0.9941, test_acc: 90.74%
Epoch 71 - loss: 0.9953, acc: 90.66% / test_loss: 0.9948, test_acc: 90.67%
Epoch 72 - loss: 0.9945, acc: 90.68% / test_loss: 0.9954, test_acc: 90.66%
Epoch 73 - loss: 0.9953, acc: 90.61% / test_loss: 0.9937, test_acc: 90.77%
Epoch 74 - loss: 0.9951, acc: 90.61% / test_loss: 0.9930, test_acc: 90.84%
Epoch 75 - loss: 0.9942, acc: 90.69% / test_loss: 0.9932, test_acc: 90.80%
Epoch 76 - loss: 0.9930, acc: 90.82% / test_loss: 0.9932, test_acc: 90.83%
Epoch 77 - loss: 0.9949, acc: 90.65% / test_loss: 0.9957, test_acc: 90.59%
Epoch 78 - loss: 0.9927, acc: 90.85% / test_loss: 0.9939, test_acc: 90.74%
Epoch 79 - loss: 0.9942, acc: 90.74% / test_loss: 0.9945, test_acc: 90.78%
Epoch 80 - loss: 0.9961, acc: 90.58% / test_loss: 0.9934, test_acc: 90.86%
Epoch 81 - loss: 0.9934, acc: 90.81% / test_loss: 0.9936, test_acc: 90.76%
Epoch 82 - loss: 0.9927, acc: 90.84% / test_loss: 0.9925, test_acc: 90.86%
Epoch 83 - loss: 0.9937, acc: 90.77% / test_loss: 0.9958, test_acc: 90.50%
Epoch 84 - loss: 0.9937, acc: 90.74% / test_loss: 0.9917, test_acc: 90.96%
Epoch 85 - loss: 0.9922, acc: 90.86% / test_loss: 0.9946, test_acc: 90.71%
Epoch 86 - loss: 0.9939, acc: 90.70% / test_loss: 0.9938, test_acc: 90.77%
Epoch 87 - loss: 0.9927, acc: 90.84% / test_loss: 0.9917, test_acc: 90.96%
Epoch 88 - loss: 0.9935, acc: 90.77% / test_loss: 0.9978, test_acc: 90.45%
Epoch 89 - loss: 0.9935, acc: 90.80% / test_loss: 0.9911, test_acc: 91.04%
Epoch 90 - loss: 0.9925, acc: 90.86% / test_loss: 0.9909, test_acc: 91.05%
Epoch 91 - loss: 0.9898, acc: 91.17% / test_loss: 0.9891, test_acc: 91.23%
Epoch 92 - loss: 0.9887, acc: 91.25% / test_loss: 0.9882, test_acc: 91.33%
Epoch 93 - loss: 0.9847, acc: 91.63% / test_loss: 0.9827, test_acc: 91.87%
Epoch 94 - loss: 0.9818, acc: 91.97% / test_loss: 0.9834, test_acc: 91.85%
Epoch 95 - loss: 0.9804, acc: 92.10% / test_loss: 0.9819, test_acc: 91.97%
Epoch 96 - loss: 0.9792, acc: 92.18% / test_loss: 0.9811, test_acc: 92.05%
Epoch 97 - loss: 0.9798, acc: 92.14% / test_loss: 0.9798, test_acc: 92.13%
Epoch 98 - loss: 0.9768, acc: 92.47% / test_loss: 0.9785, test_acc: 92.31%
Epoch 99 - loss: 0.9757, acc: 92.58% / test_loss: 0.9803, test_acc: 92.14%
Epoch 100 - loss: 0.9762, acc: 92.47% / test_loss: 0.9806, test_acc: 92.08%
Epoch 101 - loss: 0.9763, acc: 92.52% / test_loss: 0.9780, test_acc: 92.31%
Epoch 102 - loss: 0.9739, acc: 92.74% / test_loss: 0.9770, test_acc: 92.48%
Epoch 103 - loss: 0.9750, acc: 92.65% / test_loss: 0.9774, test_acc: 92.42%
Epoch 104 - loss: 0.9725, acc: 92.88% / test_loss: 0.9764, test_acc: 92.50%
Epoch 105 - loss: 0.9721, acc: 92.90% / test_loss: 0.9754, test_acc: 92.59%
Epoch 106 - loss: 0.9722, acc: 92.90% / test_loss: 0.9744, test_acc: 92.71%
Epoch 107 - loss: 0.9712, acc: 92.99% / test_loss: 0.9874, test_acc: 91.44%
Epoch 108 - loss: 0.9731, acc: 92.86% / test_loss: 0.9818, test_acc: 92.05%
Epoch 109 - loss: 0.9719, acc: 92.98% / test_loss: 0.9747, test_acc: 92.70%
Epoch 110 - loss: 0.9704, acc: 93.08% / test_loss: 0.9752, test_acc: 92.58%
Epoch 111 - loss: 0.9710, acc: 93.03% / test_loss: 0.9764, test_acc: 92.49%
Epoch 112 - loss: 0.9703, acc: 93.14% / test_loss: 0.9741, test_acc: 92.72%
Epoch 113 - loss: 0.9699, acc: 93.11% / test_loss: 0.9733, test_acc: 92.77%
Epoch 114 - loss: 0.9682, acc: 93.30% / test_loss: 0.9730, test_acc: 92.84%
Epoch 115 - loss: 0.9695, acc: 93.20% / test_loss: 0.9720, test_acc: 92.93%
Epoch 116 - loss: 0.9698, acc: 93.12% / test_loss: 0.9727, test_acc: 92.87%
Epoch 117 - loss: 0.9684, acc: 93.30% / test_loss: 0.9723, test_acc: 92.93%
Epoch 118 - loss: 0.9681, acc: 93.36% / test_loss: 0.9717, test_acc: 93.00%
Epoch 119 - loss: 0.9678, acc: 93.36% / test_loss: 0.9728, test_acc: 92.87%
Epoch 120 - loss: 0.9680, acc: 93.36% / test_loss: 0.9714, test_acc: 92.96%
Epoch 121 - loss: 0.9686, acc: 93.27% / test_loss: 0.9731, test_acc: 92.84%
Epoch 122 - loss: 0.9668, acc: 93.45% / test_loss: 0.9740, test_acc: 92.69%
Epoch 123 - loss: 0.9673, acc: 93.39% / test_loss: 0.9721, test_acc: 92.90%
Epoch 124 - loss: 0.9678, acc: 93.35% / test_loss: 0.9721, test_acc: 92.90%
Epoch 125 - loss: 0.9669, acc: 93.44% / test_loss: 0.9710, test_acc: 93.03%
Epoch 126 - loss: 0.9668, acc: 93.46% / test_loss: 0.9723, test_acc: 92.91%
Epoch 127 - loss: 0.9689, acc: 93.26% / test_loss: 0.9709, test_acc: 93.08%
Epoch 128 - loss: 0.9496, acc: 95.30% / test_loss: 0.9288, test_acc: 97.61%
Epoch 129 - loss: 0.9218, acc: 98.34% / test_loss: 0.9326, test_acc: 97.28%
Epoch 130 - loss: 0.9208, acc: 98.44% / test_loss: 0.9287, test_acc: 97.62%
Epoch 131 - loss: 0.9200, acc: 98.54% / test_loss: 0.9285, test_acc: 97.60%
Epoch 132 - loss: 0.9195, acc: 98.56% / test_loss: 0.9304, test_acc: 97.42%
Epoch 133 - loss: 0.9189, acc: 98.60% / test_loss: 0.9277, test_acc: 97.73%
Epoch 134 - loss: 0.9186, acc: 98.63% / test_loss: 0.9258, test_acc: 97.92%
Epoch 135 - loss: 0.9185, acc: 98.65% / test_loss: 0.9273, test_acc: 97.73%
Epoch 136 - loss: 0.9190, acc: 98.62% / test_loss: 0.9256, test_acc: 97.91%
Epoch 137 - loss: 0.9182, acc: 98.68% / test_loss: 0.9263, test_acc: 97.87%
Epoch 138 - loss: 0.9178, acc: 98.73% / test_loss: 0.9272, test_acc: 97.73%
Epoch 139 - loss: 0.9184, acc: 98.66% / test_loss: 0.9262, test_acc: 97.86%
Epoch 140 - loss: 0.9186, acc: 98.66% / test_loss: 0.9262, test_acc: 97.87%
Epoch 141 - loss: 0.9171, acc: 98.80% / test_loss: 0.9259, test_acc: 97.89%
Epoch 142 - loss: 0.9188, acc: 98.61% / test_loss: 0.9270, test_acc: 97.80%
Epoch 143 - loss: 0.9175, acc: 98.75% / test_loss: 0.9247, test_acc: 97.99%
Epoch 144 - loss: 0.9166, acc: 98.85% / test_loss: 0.9249, test_acc: 97.97%
Epoch 145 - loss: 0.9175, acc: 98.73% / test_loss: 0.9252, test_acc: 97.98%
Epoch 146 - loss: 0.9182, acc: 98.72% / test_loss: 0.9255, test_acc: 97.93%
Epoch 147 - loss: 0.9160, acc: 98.89% / test_loss: 0.9246, test_acc: 98.06%
Epoch 148 - loss: 0.9163, acc: 98.85% / test_loss: 0.9262, test_acc: 97.88%
Epoch 149 - loss: 0.9172, acc: 98.79% / test_loss: 0.9256, test_acc: 97.91%
Epoch 150 - loss: 0.9173, acc: 98.72% / test_loss: 0.9245, test_acc: 98.06%
Epoch 151 - loss: 0.9201, acc: 98.46% / test_loss: 0.9243, test_acc: 98.07%
Epoch 152 - loss: 0.9180, acc: 98.69% / test_loss: 0.9244, test_acc: 98.09%
Epoch 153 - loss: 0.9164, acc: 98.87% / test_loss: 0.9262, test_acc: 97.87%
Epoch 154 - loss: 0.9167, acc: 98.84% / test_loss: 0.9260, test_acc: 97.92%
Epoch 155 - loss: 0.9171, acc: 98.80% / test_loss: 0.9253, test_acc: 97.96%
Epoch 156 - loss: 0.9166, acc: 98.84% / test_loss: 0.9244, test_acc: 98.06%
Epoch 157 - loss: 0.9155, acc: 98.95% / test_loss: 0.9238, test_acc: 98.14%
Epoch 158 - loss: 0.9161, acc: 98.88% / test_loss: 0.9233, test_acc: 98.14%
Epoch 159 - loss: 0.9151, acc: 98.98% / test_loss: 0.9227, test_acc: 98.21%
Epoch 160 - loss: 0.9154, acc: 98.97% / test_loss: 0.9272, test_acc: 97.74%
Epoch 161 - loss: 0.9159, acc: 98.91% / test_loss: 0.9242, test_acc: 98.09%
Epoch 162 - loss: 0.9163, acc: 98.86% / test_loss: 0.9243, test_acc: 98.05%
Epoch 163 - loss: 0.9170, acc: 98.78% / test_loss: 0.9227, test_acc: 98.23%
Epoch 164 - loss: 0.9159, acc: 98.93% / test_loss: 0.9238, test_acc: 98.11%
Epoch 165 - loss: 0.9157, acc: 98.90% / test_loss: 0.9224, test_acc: 98.23%
Epoch 166 - loss: 0.9160, acc: 98.89% / test_loss: 0.9233, test_acc: 98.14%
Epoch 167 - loss: 0.9153, acc: 98.95% / test_loss: 0.9255, test_acc: 97.89%
Epoch 168 - loss: 0.9155, acc: 98.95% / test_loss: 0.9235, test_acc: 98.15%
Epoch 169 - loss: 0.9144, acc: 99.06% / test_loss: 0.9228, test_acc: 98.18%
Epoch 170 - loss: 0.9155, acc: 98.98% / test_loss: 0.9275, test_acc: 97.70%
Epoch 171 - loss: 0.9155, acc: 98.94% / test_loss: 0.9230, test_acc: 98.19%
Epoch 172 - loss: 0.9155, acc: 98.94% / test_loss: 0.9247, test_acc: 98.02%
Epoch 173 - loss: 0.9154, acc: 98.97% / test_loss: 0.9234, test_acc: 98.17%
Epoch 174 - loss: 0.9144, acc: 99.06% / test_loss: 0.9227, test_acc: 98.21%
Epoch 175 - loss: 0.9155, acc: 98.95% / test_loss: 0.9249, test_acc: 97.98%
Epoch 176 - loss: 0.9156, acc: 98.91% / test_loss: 0.9242, test_acc: 98.04%
Epoch 177 - loss: 0.9162, acc: 98.88% / test_loss: 0.9272, test_acc: 97.73%
Epoch 178 - loss: 0.9151, acc: 98.99% / test_loss: 0.9218, test_acc: 98.32%
Epoch 179 - loss: 0.9140, acc: 99.09% / test_loss: 0.9217, test_acc: 98.30%
Epoch 180 - loss: 0.9143, acc: 99.07% / test_loss: 0.9230, test_acc: 98.17%
Epoch 181 - loss: 0.9149, acc: 99.02% / test_loss: 0.9252, test_acc: 97.95%
Epoch 182 - loss: 0.9161, acc: 98.90% / test_loss: 0.9240, test_acc: 98.08%
Epoch 183 - loss: 0.9144, acc: 99.06% / test_loss: 0.9244, test_acc: 98.01%
Epoch 184 - loss: 0.9149, acc: 99.00% / test_loss: 0.9295, test_acc: 97.54%
Epoch 185 - loss: 0.9141, acc: 99.08% / test_loss: 0.9225, test_acc: 98.23%
Epoch 186 - loss: 0.9144, acc: 99.06% / test_loss: 0.9217, test_acc: 98.31%
Epoch 187 - loss: 0.9141, acc: 99.08% / test_loss: 0.9220, test_acc: 98.26%
Epoch 188 - loss: 0.9149, acc: 99.00% / test_loss: 0.9224, test_acc: 98.25%
Epoch 189 - loss: 0.9147, acc: 99.02% / test_loss: 0.9224, test_acc: 98.22%
Epoch 190 - loss: 0.9159, acc: 98.91% / test_loss: 0.9234, test_acc: 98.18%
Epoch 191 - loss: 0.9163, acc: 98.85% / test_loss: 0.9234, test_acc: 98.12%
Epoch 192 - loss: 0.9138, acc: 99.12% / test_loss: 0.9216, test_acc: 98.35%
Epoch 193 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.29%
Epoch 194 - loss: 0.9156, acc: 98.91% / test_loss: 0.9233, test_acc: 98.19%
Epoch 195 - loss: 0.9159, acc: 98.92% / test_loss: 0.9229, test_acc: 98.17%
Epoch 196 - loss: 0.9139, acc: 99.10% / test_loss: 0.9221, test_acc: 98.24%
Epoch 197 - loss: 0.9139, acc: 99.09% / test_loss: 0.9218, test_acc: 98.30%
Epoch 198 - loss: 0.9156, acc: 98.91% / test_loss: 0.9250, test_acc: 97.99%
Epoch 199 - loss: 0.9158, acc: 98.91% / test_loss: 0.9244, test_acc: 98.05%
Epoch 200 - loss: 0.9144, acc: 99.06% / test_loss: 0.9213, test_acc: 98.35%
Epoch 201 - loss: 0.9140, acc: 99.08% / test_loss: 0.9220, test_acc: 98.28%
Epoch 202 - loss: 0.9146, acc: 99.03% / test_loss: 0.9274, test_acc: 97.70%
Epoch 203 - loss: 0.9146, acc: 99.03% / test_loss: 0.9242, test_acc: 98.07%
Epoch 204 - loss: 0.9141, acc: 99.09% / test_loss: 0.9221, test_acc: 98.25%
Epoch 205 - loss: 0.9135, acc: 99.13% / test_loss: 0.9306, test_acc: 97.41%
Epoch 206 - loss: 0.9140, acc: 99.10% / test_loss: 0.9213, test_acc: 98.32%
Epoch 207 - loss: 0.9149, acc: 98.97% / test_loss: 0.9244, test_acc: 98.07%
Epoch 208 - loss: 0.9154, acc: 98.94% / test_loss: 0.9243, test_acc: 98.05%
Epoch 209 - loss: 0.9150, acc: 98.98% / test_loss: 0.9224, test_acc: 98.25%
Epoch 210 - loss: 0.9149, acc: 99.00% / test_loss: 0.9236, test_acc: 98.10%
Epoch 211 - loss: 0.9144, acc: 99.06% / test_loss: 0.9232, test_acc: 98.17%
Epoch 212 - loss: 0.9143, acc: 99.05% / test_loss: 0.9216, test_acc: 98.36%
Epoch 213 - loss: 0.9147, acc: 99.04% / test_loss: 0.9228, test_acc: 98.17%
Epoch 214 - loss: 0.9140, acc: 99.07% / test_loss: 0.9221, test_acc: 98.28%
Epoch 215 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.24%
Epoch 216 - loss: 0.9144, acc: 99.06% / test_loss: 0.9223, test_acc: 98.26%
Epoch 217 - loss: 0.9139, acc: 99.11% / test_loss: 0.9224, test_acc: 98.24%
Epoch 218 - loss: 0.9138, acc: 99.12% / test_loss: 0.9212, test_acc: 98.36%
Epoch 219 - loss: 0.9132, acc: 99.19% / test_loss: 0.9221, test_acc: 98.26%
Epoch 220 - loss: 0.9142, acc: 99.08% / test_loss: 0.9226, test_acc: 98.19%
Epoch 221 - loss: 0.9150, acc: 98.99% / test_loss: 0.9220, test_acc: 98.29%
Epoch 222 - loss: 0.9143, acc: 99.06% / test_loss: 0.9219, test_acc: 98.28%
Epoch 223 - loss: 0.9133, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 224 - loss: 0.9141, acc: 99.06% / test_loss: 0.9222, test_acc: 98.24%
Epoch 225 - loss: 0.9133, acc: 99.20% / test_loss: 0.9231, test_acc: 98.15%
Epoch 226 - loss: 0.9132, acc: 99.18% / test_loss: 0.9211, test_acc: 98.33%
Epoch 227 - loss: 0.9129, acc: 99.20% / test_loss: 0.9219, test_acc: 98.31%
Epoch 228 - loss: 0.9136, acc: 99.16% / test_loss: 0.9210, test_acc: 98.38%
Epoch 229 - loss: 0.9137, acc: 99.12% / test_loss: 0.9223, test_acc: 98.27%
Epoch 230 - loss: 0.9150, acc: 98.99% / test_loss: 0.9258, test_acc: 97.92%
Epoch 231 - loss: 0.9148, acc: 98.99% / test_loss: 0.9215, test_acc: 98.33%
Epoch 232 - loss: 0.9130, acc: 99.18% / test_loss: 0.9214, test_acc: 98.35%
Epoch 233 - loss: 0.9137, acc: 99.11% / test_loss: 0.9227, test_acc: 98.20%
Epoch 234 - loss: 0.9135, acc: 99.14% / test_loss: 0.9219, test_acc: 98.29%
Epoch 235 - loss: 0.9133, acc: 99.16% / test_loss: 0.9220, test_acc: 98.23%
Epoch 236 - loss: 0.9133, acc: 99.15% / test_loss: 0.9221, test_acc: 98.25%
Epoch 237 - loss: 0.9127, acc: 99.22% / test_loss: 0.9231, test_acc: 98.17%
Epoch 238 - loss: 0.9135, acc: 99.14% / test_loss: 0.9215, test_acc: 98.34%
Epoch 239 - loss: 0.9146, acc: 99.03% / test_loss: 0.9220, test_acc: 98.30%
Epoch 240 - loss: 0.9129, acc: 99.18% / test_loss: 0.9215, test_acc: 98.35%
Epoch 241 - loss: 0.9125, acc: 99.23% / test_loss: 0.9214, test_acc: 98.32%
Epoch 242 - loss: 0.9130, acc: 99.20% / test_loss: 0.9228, test_acc: 98.20%
Epoch 243 - loss: 0.9132, acc: 99.16% / test_loss: 0.9233, test_acc: 98.17%
Epoch 244 - loss: 0.9167, acc: 98.81% / test_loss: 0.9235, test_acc: 98.10%
Epoch 245 - loss: 0.9142, acc: 99.06% / test_loss: 0.9231, test_acc: 98.17%
Epoch 246 - loss: 0.9133, acc: 99.16% / test_loss: 0.9214, test_acc: 98.37%
Epoch 247 - loss: 0.9126, acc: 99.21% / test_loss: 0.9213, test_acc: 98.35%
Epoch 248 - loss: 0.9127, acc: 99.21% / test_loss: 0.9218, test_acc: 98.29%
Epoch 249 - loss: 0.9141, acc: 99.09% / test_loss: 0.9227, test_acc: 98.22%
Epoch 250 - loss: 0.9151, acc: 98.97% / test_loss: 0.9227, test_acc: 98.18%
Epoch 251 - loss: 0.9131, acc: 99.18% / test_loss: 0.9218, test_acc: 98.31%
Epoch 252 - loss: 0.9130, acc: 99.18% / test_loss: 0.9223, test_acc: 98.25%
Epoch 253 - loss: 0.9133, acc: 99.16% / test_loss: 0.9241, test_acc: 98.07%
Epoch 254 - loss: 0.9139, acc: 99.09% / test_loss: 0.9244, test_acc: 98.07%
Epoch 255 - loss: 0.9130, acc: 99.18% / test_loss: 0.9233, test_acc: 98.16%
Epoch 256 - loss: 0.9139, acc: 99.08% / test_loss: 0.9214, test_acc: 98.31%
Epoch 257 - loss: 0.9124, acc: 99.24% / test_loss: 0.9208, test_acc: 98.41%
Epoch 258 - loss: 0.9128, acc: 99.20% / test_loss: 0.9212, test_acc: 98.35%
Epoch 259 - loss: 0.9136, acc: 99.11% / test_loss: 0.9219, test_acc: 98.27%
Epoch 260 - loss: 0.9143, acc: 99.05% / test_loss: 0.9218, test_acc: 98.28%
Epoch 261 - loss: 0.9147, acc: 99.03% / test_loss: 0.9234, test_acc: 98.15%
Epoch 262 - loss: 0.9135, acc: 99.15% / test_loss: 0.9252, test_acc: 97.92%
Epoch 263 - loss: 0.9132, acc: 99.18% / test_loss: 0.9220, test_acc: 98.24%
Epoch 264 - loss: 0.9132, acc: 99.16% / test_loss: 0.9212, test_acc: 98.36%
Epoch 265 - loss: 0.9125, acc: 99.23% / test_loss: 0.9208, test_acc: 98.37%
Epoch 266 - loss: 0.9131, acc: 99.18% / test_loss: 0.9216, test_acc: 98.31%
Epoch 267 - loss: 0.9133, acc: 99.18% / test_loss: 0.9254, test_acc: 97.94%
Epoch 268 - loss: 0.9150, acc: 98.98% / test_loss: 0.9240, test_acc: 98.04%
Epoch 269 - loss: 0.9143, acc: 99.07% / test_loss: 0.9214, test_acc: 98.34%
Epoch 270 - loss: 0.9135, acc: 99.14% / test_loss: 0.9214, test_acc: 98.32%
Epoch 271 - loss: 0.9129, acc: 99.19% / test_loss: 0.9217, test_acc: 98.29%
Epoch 272 - loss: 0.9129, acc: 99.20% / test_loss: 0.9238, test_acc: 98.07%
Epoch 273 - loss: 0.9126, acc: 99.22% / test_loss: 0.9207, test_acc: 98.44%
Epoch 274 - loss: 0.9130, acc: 99.16% / test_loss: 0.9218, test_acc: 98.29%
Epoch 275 - loss: 0.9131, acc: 99.18% / test_loss: 0.9211, test_acc: 98.38%
Epoch 276 - loss: 0.9123, acc: 99.25% / test_loss: 0.9218, test_acc: 98.29%
Epoch 277 - loss: 0.9125, acc: 99.23% / test_loss: 0.9210, test_acc: 98.37%
Epoch 278 - loss: 0.9127, acc: 99.21% / test_loss: 0.9218, test_acc: 98.30%
Epoch 279 - loss: 0.9123, acc: 99.26% / test_loss: 0.9224, test_acc: 98.26%
Epoch 280 - loss: 0.9138, acc: 99.11% / test_loss: 0.9212, test_acc: 98.36%
Epoch 281 - loss: 0.9140, acc: 99.09% / test_loss: 0.9226, test_acc: 98.23%
Epoch 282 - loss: 0.9132, acc: 99.15% / test_loss: 0.9207, test_acc: 98.41%
Epoch 283 - loss: 0.9134, acc: 99.15% / test_loss: 0.9247, test_acc: 97.98%
Epoch 284 - loss: 0.9136, acc: 99.14% / test_loss: 0.9232, test_acc: 98.21%
Epoch 285 - loss: 0.9130, acc: 99.18% / test_loss: 0.9216, test_acc: 98.32%
Epoch 286 - loss: 0.9127, acc: 99.21% / test_loss: 0.9230, test_acc: 98.17%
Epoch 287 - loss: 0.9125, acc: 99.22% / test_loss: 0.9223, test_acc: 98.24%
Epoch 288 - loss: 0.9123, acc: 99.25% / test_loss: 0.9223, test_acc: 98.24%
Epoch 289 - loss: 0.9129, acc: 99.19% / test_loss: 0.9220, test_acc: 98.27%
Epoch 290 - loss: 0.9134, acc: 99.14% / test_loss: 0.9287, test_acc: 97.61%
Epoch 291 - loss: 0.9158, acc: 98.88% / test_loss: 0.9212, test_acc: 98.36%
Epoch 292 - loss: 0.9130, acc: 99.18% / test_loss: 0.9222, test_acc: 98.27%
Epoch 293 - loss: 0.9128, acc: 99.19% / test_loss: 0.9211, test_acc: 98.36%
Epoch 294 - loss: 0.9143, acc: 99.06% / test_loss: 0.9214, test_acc: 98.32%
Epoch 295 - loss: 0.9125, acc: 99.23% / test_loss: 0.9204, test_acc: 98.43%
Epoch 296 - loss: 0.9142, acc: 99.05% / test_loss: 0.9292, test_acc: 97.55%
Epoch 297 - loss: 0.9153, acc: 98.94% / test_loss: 0.9221, test_acc: 98.28%
Epoch 298 - loss: 0.9126, acc: 99.22% / test_loss: 0.9209, test_acc: 98.41%
Epoch 299 - loss: 0.9122, acc: 99.27% / test_loss: 0.9209, test_acc: 98.38%
Epoch 300 - loss: 0.9134, acc: 99.15% / test_loss: 0.9239, test_acc: 98.12%
Epoch 301 - loss: 0.9142, acc: 99.06% / test_loss: 0.9225, test_acc: 98.25%
Epoch 302 - loss: 0.9128, acc: 99.21% / test_loss: 0.9222, test_acc: 98.23%
Epoch 303 - loss: 0.9121, acc: 99.28% / test_loss: 0.9207, test_acc: 98.39%
Epoch 304 - loss: 0.9123, acc: 99.25% / test_loss: 0.9242, test_acc: 98.08%
Epoch 305 - loss: 0.9137, acc: 99.12% / test_loss: 0.9222, test_acc: 98.29%
Epoch 306 - loss: 0.9142, acc: 99.07% / test_loss: 0.9225, test_acc: 98.24%
Epoch 307 - loss: 0.9130, acc: 99.18% / test_loss: 0.9247, test_acc: 98.01%
Epoch 308 - loss: 0.9133, acc: 99.15% / test_loss: 0.9220, test_acc: 98.28%
Epoch 309 - loss: 0.9132, acc: 99.17% / test_loss: 0.9225, test_acc: 98.21%
Epoch 310 - loss: 0.9135, acc: 99.13% / test_loss: 0.9237, test_acc: 98.10%
Epoch 311 - loss: 0.9126, acc: 99.24% / test_loss: 0.9220, test_acc: 98.26%
Epoch 312 - loss: 0.9123, acc: 99.24% / test_loss: 0.9239, test_acc: 98.10%
Epoch 313 - loss: 0.9142, acc: 99.08% / test_loss: 0.9216, test_acc: 98.35%
Epoch 314 - loss: 0.9129, acc: 99.21% / test_loss: 0.9237, test_acc: 98.10%
Epoch 315 - loss: 0.9128, acc: 99.21% / test_loss: 0.9205, test_acc: 98.44%
Epoch 316 - loss: 0.9126, acc: 99.24% / test_loss: 0.9206, test_acc: 98.44%
Epoch 317 - loss: 0.9127, acc: 99.21% / test_loss: 0.9221, test_acc: 98.24%
Epoch 318 - loss: 0.9123, acc: 99.25% / test_loss: 0.9221, test_acc: 98.25%
Epoch 319 - loss: 0.9151, acc: 98.95% / test_loss: 0.9243, test_acc: 98.01%
Epoch 320 - loss: 0.9151, acc: 98.97% / test_loss: 0.9223, test_acc: 98.25%
Epoch 321 - loss: 0.9132, acc: 99.16% / test_loss: 0.9330, test_acc: 97.15%
Epoch 322 - loss: 0.9130, acc: 99.18% / test_loss: 0.9205, test_acc: 98.44%
Epoch 323 - loss: 0.9128, acc: 99.21% / test_loss: 0.9214, test_acc: 98.33%
Epoch 324 - loss: 0.9133, acc: 99.15% / test_loss: 0.9213, test_acc: 98.36%
Epoch 325 - loss: 0.9126, acc: 99.23% / test_loss: 0.9206, test_acc: 98.44%
Epoch 326 - loss: 0.9120, acc: 99.29% / test_loss: 0.9231, test_acc: 98.20%
Epoch 327 - loss: 0.9130, acc: 99.19% / test_loss: 0.9220, test_acc: 98.28%
Epoch 328 - loss: 0.9141, acc: 99.07% / test_loss: 0.9224, test_acc: 98.23%
Epoch 329 - loss: 0.9130, acc: 99.19% / test_loss: 0.9216, test_acc: 98.33%
Epoch 330 - loss: 0.9134, acc: 99.14% / test_loss: 0.9203, test_acc: 98.43%
Epoch 331 - loss: 0.9136, acc: 99.12% / test_loss: 0.9221, test_acc: 98.24%
Epoch 332 - loss: 0.9137, acc: 99.11% / test_loss: 0.9216, test_acc: 98.34%
Epoch 333 - loss: 0.9126, acc: 99.23% / test_loss: 0.9225, test_acc: 98.20%
Epoch 334 - loss: 0.9132, acc: 99.17% / test_loss: 0.9211, test_acc: 98.37%
Epoch 335 - loss: 0.9124, acc: 99.24% / test_loss: 0.9220, test_acc: 98.29%
Epoch 336 - loss: 0.9122, acc: 99.27% / test_loss: 0.9223, test_acc: 98.26%
Epoch 337 - loss: 0.9122, acc: 99.28% / test_loss: 0.9219, test_acc: 98.32%
Epoch 338 - loss: 0.9119, acc: 99.30% / test_loss: 0.9209, test_acc: 98.37%
Epoch 339 - loss: 0.9139, acc: 99.09% / test_loss: 0.9252, test_acc: 97.95%
Epoch 340 - loss: 0.9131, acc: 99.18% / test_loss: 0.9224, test_acc: 98.24%
Epoch 341 - loss: 0.9146, acc: 99.02% / test_loss: 0.9214, test_acc: 98.31%
Epoch 342 - loss: 0.9133, acc: 99.15% / test_loss: 0.9206, test_acc: 98.43%
Epoch 343 - loss: 0.9118, acc: 99.31% / test_loss: 0.9204, test_acc: 98.42%
Epoch 344 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.23%
Epoch 345 - loss: 0.9127, acc: 99.21% / test_loss: 0.9201, test_acc: 98.44%
Epoch 346 - loss: 0.9120, acc: 99.29% / test_loss: 0.9208, test_acc: 98.41%
Epoch 347 - loss: 0.9118, acc: 99.29% / test_loss: 0.9235, test_acc: 98.12%
Epoch 348 - loss: 0.9119, acc: 99.31% / test_loss: 0.9206, test_acc: 98.40%
Epoch 349 - loss: 0.9137, acc: 99.12% / test_loss: 0.9219, test_acc: 98.26%
Epoch 350 - loss: 0.9141, acc: 99.06% / test_loss: 0.9224, test_acc: 98.24%
Epoch 351 - loss: 0.9131, acc: 99.17% / test_loss: 0.9217, test_acc: 98.29%
Epoch 352 - loss: 0.9123, acc: 99.26% / test_loss: 0.9220, test_acc: 98.29%
Epoch 353 - loss: 0.9128, acc: 99.19% / test_loss: 0.9221, test_acc: 98.28%
Epoch 354 - loss: 0.9137, acc: 99.11% / test_loss: 0.9221, test_acc: 98.27%
Epoch 355 - loss: 0.9127, acc: 99.21% / test_loss: 0.9213, test_acc: 98.36%
Epoch 356 - loss: 0.9141, acc: 99.08% / test_loss: 0.9220, test_acc: 98.26%
Epoch 357 - loss: 0.9126, acc: 99.22% / test_loss: 0.9217, test_acc: 98.27%
Epoch 358 - loss: 0.9124, acc: 99.24% / test_loss: 0.9217, test_acc: 98.30%
Epoch 359 - loss: 0.9127, acc: 99.21% / test_loss: 0.9241, test_acc: 98.07%
Epoch 360 - loss: 0.9132, acc: 99.18% / test_loss: 0.9208, test_acc: 98.40%
Epoch 361 - loss: 0.9121, acc: 99.28% / test_loss: 0.9204, test_acc: 98.43%
Epoch 362 - loss: 0.9138, acc: 99.10% / test_loss: 0.9296, test_acc: 97.52%
Epoch 363 - loss: 0.9144, acc: 99.05% / test_loss: 0.9206, test_acc: 98.38%
Epoch 364 - loss: 0.9131, acc: 99.17% / test_loss: 0.9223, test_acc: 98.23%
Epoch 365 - loss: 0.9122, acc: 99.26% / test_loss: 0.9226, test_acc: 98.22%
Epoch 366 - loss: 0.9124, acc: 99.24% / test_loss: 0.9224, test_acc: 98.25%
Epoch 367 - loss: 0.9122, acc: 99.26% / test_loss: 0.9212, test_acc: 98.35%
Epoch 368 - loss: 0.9128, acc: 99.21% / test_loss: 0.9223, test_acc: 98.26%
Epoch 369 - loss: 0.9122, acc: 99.28% / test_loss: 0.9209, test_acc: 98.40%
Epoch 370 - loss: 0.9135, acc: 99.12% / test_loss: 0.9217, test_acc: 98.29%
Epoch 371 - loss: 0.9136, acc: 99.12% / test_loss: 0.9324, test_acc: 97.21%
Epoch 372 - loss: 0.9135, acc: 99.12% / test_loss: 0.9221, test_acc: 98.26%
Epoch 373 - loss: 0.9122, acc: 99.27% / test_loss: 0.9208, test_acc: 98.37%
Epoch 374 - loss: 0.9121, acc: 99.28% / test_loss: 0.9246, test_acc: 97.99%
Epoch 375 - loss: 0.9128, acc: 99.21% / test_loss: 0.9214, test_acc: 98.31%
Epoch 376 - loss: 0.9126, acc: 99.23% / test_loss: 0.9214, test_acc: 98.30%
Epoch 377 - loss: 0.9132, acc: 99.15% / test_loss: 0.9317, test_acc: 97.29%
Epoch 378 - loss: 0.9134, acc: 99.15% / test_loss: 0.9206, test_acc: 98.44%
Epoch 379 - loss: 0.9137, acc: 99.12% / test_loss: 0.9242, test_acc: 98.04%
Epoch 380 - loss: 0.9134, acc: 99.13% / test_loss: 0.9215, test_acc: 98.32%
Epoch 381 - loss: 0.9120, acc: 99.29% / test_loss: 0.9210, test_acc: 98.38%
Epoch 382 - loss: 0.9120, acc: 99.28% / test_loss: 0.9230, test_acc: 98.20%
Epoch 383 - loss: 0.9151, acc: 98.97% / test_loss: 0.9205, test_acc: 98.42%
Epoch 384 - loss: 0.9120, acc: 99.28% / test_loss: 0.9203, test_acc: 98.45%
Epoch 385 - loss: 0.9121, acc: 99.28% / test_loss: 0.9205, test_acc: 98.42%
Epoch 386 - loss: 0.9119, acc: 99.31% / test_loss: 0.9206, test_acc: 98.43%
Epoch 387 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.48%
Epoch 388 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.47%
Epoch 389 - loss: 0.9117, acc: 99.31% / test_loss: 0.9201, test_acc: 98.50%
Epoch 390 - loss: 0.9126, acc: 99.22% / test_loss: 0.9225, test_acc: 98.24%
Epoch 391 - loss: 0.9139, acc: 99.09% / test_loss: 0.9206, test_acc: 98.42%
Epoch 392 - loss: 0.9127, acc: 99.21% / test_loss: 0.9245, test_acc: 97.98%
Epoch 393 - loss: 0.9134, acc: 99.14% / test_loss: 0.9218, test_acc: 98.30%
Epoch 394 - loss: 0.9144, acc: 99.04% / test_loss: 0.9235, test_acc: 98.14%
Epoch 395 - loss: 0.9127, acc: 99.21% / test_loss: 0.9205, test_acc: 98.42%
Epoch 396 - loss: 0.9132, acc: 99.15% / test_loss: 0.9212, test_acc: 98.35%
Epoch 397 - loss: 0.9120, acc: 99.28% / test_loss: 0.9213, test_acc: 98.36%
Epoch 398 - loss: 0.9124, acc: 99.24% / test_loss: 0.9256, test_acc: 97.92%
Epoch 399 - loss: 0.9122, acc: 99.26% / test_loss: 0.9213, test_acc: 98.34%
Epoch 400 - loss: 0.9121, acc: 99.25% / test_loss: 0.9209, test_acc: 98.37%
Best test accuracy 98.50% in epoch 389.
----------------------------------------------------------------------------------------------------
Run 10
Epoch 1 - loss: 1.3538, acc: 55.46% / test_loss: 1.2185, test_acc: 69.91%
Epoch 2 - loss: 1.1334, acc: 78.07% / test_loss: 1.0875, test_acc: 82.10%
Epoch 3 - loss: 1.0934, acc: 81.38% / test_loss: 1.0822, test_acc: 82.54%
Epoch 4 - loss: 1.0672, acc: 84.07% / test_loss: 1.0360, test_acc: 87.45%
Epoch 5 - loss: 1.0413, acc: 86.66% / test_loss: 1.0255, test_acc: 88.12%
Epoch 6 - loss: 1.0356, acc: 87.13% / test_loss: 1.0216, test_acc: 88.35%
Epoch 7 - loss: 1.0274, acc: 87.90% / test_loss: 1.0190, test_acc: 88.74%
Epoch 8 - loss: 1.0255, acc: 88.00% / test_loss: 1.0168, test_acc: 88.92%
Epoch 9 - loss: 1.0236, acc: 88.15% / test_loss: 1.0211, test_acc: 88.60%
Epoch 10 - loss: 1.0217, acc: 88.34% / test_loss: 1.0129, test_acc: 89.17%
Epoch 11 - loss: 1.0205, acc: 88.46% / test_loss: 1.0176, test_acc: 88.82%
Epoch 12 - loss: 1.0178, acc: 88.66% / test_loss: 1.0116, test_acc: 89.26%
Epoch 13 - loss: 1.0160, acc: 88.82% / test_loss: 1.0076, test_acc: 89.63%
Epoch 14 - loss: 1.0156, acc: 88.87% / test_loss: 1.0111, test_acc: 89.27%
Epoch 15 - loss: 1.0143, acc: 88.95% / test_loss: 1.0117, test_acc: 89.22%
Epoch 16 - loss: 1.0137, acc: 89.03% / test_loss: 1.0095, test_acc: 89.51%
Epoch 17 - loss: 1.0116, acc: 89.22% / test_loss: 1.0060, test_acc: 89.79%
Epoch 18 - loss: 1.0098, acc: 89.39% / test_loss: 1.0057, test_acc: 89.75%
Epoch 19 - loss: 1.0108, acc: 89.25% / test_loss: 1.0024, test_acc: 90.05%
Epoch 20 - loss: 1.0085, acc: 89.49% / test_loss: 1.0048, test_acc: 89.86%
Epoch 21 - loss: 1.0070, acc: 89.63% / test_loss: 1.0020, test_acc: 90.13%
Epoch 22 - loss: 1.0076, acc: 89.54% / test_loss: 1.0043, test_acc: 89.86%
Epoch 23 - loss: 1.0076, acc: 89.59% / test_loss: 1.0021, test_acc: 90.15%
Epoch 24 - loss: 1.0066, acc: 89.66% / test_loss: 1.0027, test_acc: 90.05%
Epoch 25 - loss: 1.0057, acc: 89.73% / test_loss: 1.0039, test_acc: 89.92%
Epoch 26 - loss: 0.9962, acc: 90.82% / test_loss: 0.9638, test_acc: 94.20%
Epoch 27 - loss: 0.9660, acc: 94.02% / test_loss: 0.9626, test_acc: 94.34%
Epoch 28 - loss: 0.9631, acc: 94.28% / test_loss: 0.9627, test_acc: 94.25%
Epoch 29 - loss: 0.9636, acc: 94.18% / test_loss: 0.9642, test_acc: 94.14%
Epoch 30 - loss: 0.9608, acc: 94.49% / test_loss: 0.9584, test_acc: 94.73%
Epoch 31 - loss: 0.9621, acc: 94.31% / test_loss: 0.9577, test_acc: 94.78%
Epoch 32 - loss: 0.9611, acc: 94.42% / test_loss: 0.9609, test_acc: 94.47%
Epoch 33 - loss: 0.9596, acc: 94.56% / test_loss: 0.9582, test_acc: 94.71%
Epoch 34 - loss: 0.9598, acc: 94.58% / test_loss: 0.9591, test_acc: 94.65%
Epoch 35 - loss: 0.9602, acc: 94.51% / test_loss: 0.9583, test_acc: 94.72%
Epoch 36 - loss: 0.9595, acc: 94.59% / test_loss: 0.9569, test_acc: 94.84%
Epoch 37 - loss: 0.9592, acc: 94.59% / test_loss: 0.9571, test_acc: 94.81%
Epoch 38 - loss: 0.9588, acc: 94.63% / test_loss: 0.9567, test_acc: 94.82%
Epoch 39 - loss: 0.9598, acc: 94.53% / test_loss: 0.9581, test_acc: 94.71%
Epoch 40 - loss: 0.9589, acc: 94.59% / test_loss: 0.9598, test_acc: 94.55%
Epoch 41 - loss: 0.9589, acc: 94.64% / test_loss: 0.9556, test_acc: 94.96%
Epoch 42 - loss: 0.9580, acc: 94.74% / test_loss: 0.9542, test_acc: 95.05%
Epoch 43 - loss: 0.9572, acc: 94.79% / test_loss: 0.9557, test_acc: 94.93%
Epoch 44 - loss: 0.9564, acc: 94.87% / test_loss: 0.9589, test_acc: 94.66%
Epoch 45 - loss: 0.9575, acc: 94.73% / test_loss: 0.9594, test_acc: 94.53%
Epoch 46 - loss: 0.9556, acc: 94.93% / test_loss: 0.9543, test_acc: 95.06%
Epoch 47 - loss: 0.9568, acc: 94.81% / test_loss: 0.9538, test_acc: 95.08%
Epoch 48 - loss: 0.9558, acc: 94.92% / test_loss: 0.9536, test_acc: 95.13%
Epoch 49 - loss: 0.9555, acc: 94.97% / test_loss: 0.9544, test_acc: 95.08%
Epoch 50 - loss: 0.9544, acc: 95.09% / test_loss: 0.9597, test_acc: 94.60%
Epoch 51 - loss: 0.9567, acc: 94.83% / test_loss: 0.9525, test_acc: 95.26%
Epoch 52 - loss: 0.9533, acc: 95.18% / test_loss: 0.9529, test_acc: 95.20%
Epoch 53 - loss: 0.9555, acc: 94.93% / test_loss: 0.9571, test_acc: 94.77%
Epoch 54 - loss: 0.9544, acc: 95.10% / test_loss: 0.9551, test_acc: 95.00%
Epoch 55 - loss: 0.9534, acc: 95.15% / test_loss: 0.9525, test_acc: 95.24%
Epoch 56 - loss: 0.9528, acc: 95.20% / test_loss: 0.9522, test_acc: 95.24%
Epoch 57 - loss: 0.9541, acc: 95.10% / test_loss: 0.9524, test_acc: 95.24%
Epoch 58 - loss: 0.9536, acc: 95.14% / test_loss: 0.9529, test_acc: 95.24%
Epoch 59 - loss: 0.9528, acc: 95.24% / test_loss: 0.9544, test_acc: 95.02%
Epoch 60 - loss: 0.9542, acc: 95.06% / test_loss: 0.9545, test_acc: 94.99%
Epoch 61 - loss: 0.9529, acc: 95.21% / test_loss: 0.9518, test_acc: 95.33%
Epoch 62 - loss: 0.9519, acc: 95.30% / test_loss: 0.9524, test_acc: 95.25%
Epoch 63 - loss: 0.9536, acc: 95.12% / test_loss: 0.9491, test_acc: 95.58%
Epoch 64 - loss: 0.9491, acc: 95.60% / test_loss: 0.9494, test_acc: 95.58%
Epoch 65 - loss: 0.9497, acc: 95.55% / test_loss: 0.9526, test_acc: 95.23%
Epoch 66 - loss: 0.9495, acc: 95.57% / test_loss: 0.9505, test_acc: 95.48%
Epoch 67 - loss: 0.9479, acc: 95.73% / test_loss: 0.9487, test_acc: 95.63%
Epoch 68 - loss: 0.9473, acc: 95.77% / test_loss: 0.9482, test_acc: 95.67%
Epoch 69 - loss: 0.9468, acc: 95.82% / test_loss: 0.9494, test_acc: 95.55%
Epoch 70 - loss: 0.9491, acc: 95.62% / test_loss: 0.9492, test_acc: 95.58%
Epoch 71 - loss: 0.9485, acc: 95.63% / test_loss: 0.9491, test_acc: 95.58%
Epoch 72 - loss: 0.9482, acc: 95.67% / test_loss: 0.9564, test_acc: 94.81%
Epoch 73 - loss: 0.9472, acc: 95.75% / test_loss: 0.9506, test_acc: 95.42%
Epoch 74 - loss: 0.9463, acc: 95.86% / test_loss: 0.9514, test_acc: 95.32%
Epoch 75 - loss: 0.9458, acc: 95.92% / test_loss: 0.9467, test_acc: 95.83%
Epoch 76 - loss: 0.9461, acc: 95.88% / test_loss: 0.9475, test_acc: 95.73%
Epoch 77 - loss: 0.9470, acc: 95.80% / test_loss: 0.9526, test_acc: 95.22%
Epoch 78 - loss: 0.9478, acc: 95.70% / test_loss: 0.9491, test_acc: 95.58%
Epoch 79 - loss: 0.9469, acc: 95.77% / test_loss: 0.9492, test_acc: 95.55%
Epoch 80 - loss: 0.9461, acc: 95.86% / test_loss: 0.9485, test_acc: 95.62%
Epoch 81 - loss: 0.9472, acc: 95.79% / test_loss: 0.9470, test_acc: 95.82%
Epoch 82 - loss: 0.9469, acc: 95.85% / test_loss: 0.9471, test_acc: 95.79%
Epoch 83 - loss: 0.9454, acc: 95.95% / test_loss: 0.9464, test_acc: 95.84%
Epoch 84 - loss: 0.9464, acc: 95.87% / test_loss: 0.9468, test_acc: 95.80%
Epoch 85 - loss: 0.9453, acc: 95.96% / test_loss: 0.9507, test_acc: 95.41%
Epoch 86 - loss: 0.9464, acc: 95.83% / test_loss: 0.9486, test_acc: 95.61%
Epoch 87 - loss: 0.9467, acc: 95.82% / test_loss: 0.9480, test_acc: 95.68%
Epoch 88 - loss: 0.9459, acc: 95.89% / test_loss: 0.9470, test_acc: 95.78%
Epoch 89 - loss: 0.9460, acc: 95.89% / test_loss: 0.9480, test_acc: 95.67%
Epoch 90 - loss: 0.9464, acc: 95.85% / test_loss: 0.9492, test_acc: 95.57%
Epoch 91 - loss: 0.9461, acc: 95.88% / test_loss: 0.9467, test_acc: 95.80%
Epoch 92 - loss: 0.9451, acc: 95.97% / test_loss: 0.9471, test_acc: 95.76%
Epoch 93 - loss: 0.9457, acc: 95.92% / test_loss: 0.9487, test_acc: 95.64%
Epoch 94 - loss: 0.9473, acc: 95.77% / test_loss: 0.9484, test_acc: 95.64%
Epoch 95 - loss: 0.9453, acc: 95.97% / test_loss: 0.9470, test_acc: 95.77%
Epoch 96 - loss: 0.9466, acc: 95.82% / test_loss: 0.9494, test_acc: 95.54%
Epoch 97 - loss: 0.9461, acc: 95.86% / test_loss: 0.9475, test_acc: 95.76%
Epoch 98 - loss: 0.9456, acc: 95.92% / test_loss: 0.9481, test_acc: 95.68%
Epoch 99 - loss: 0.9464, acc: 95.83% / test_loss: 0.9474, test_acc: 95.76%
Epoch 100 - loss: 0.9459, acc: 95.89% / test_loss: 0.9469, test_acc: 95.78%
Epoch 101 - loss: 0.9453, acc: 95.95% / test_loss: 0.9483, test_acc: 95.66%
Epoch 102 - loss: 0.9450, acc: 95.99% / test_loss: 0.9468, test_acc: 95.79%
Epoch 103 - loss: 0.9444, acc: 96.05% / test_loss: 0.9476, test_acc: 95.67%
Epoch 104 - loss: 0.9466, acc: 95.81% / test_loss: 0.9475, test_acc: 95.73%
Epoch 105 - loss: 0.9461, acc: 95.88% / test_loss: 0.9478, test_acc: 95.70%
Epoch 106 - loss: 0.9450, acc: 95.99% / test_loss: 0.9466, test_acc: 95.80%
Epoch 107 - loss: 0.9453, acc: 95.93% / test_loss: 0.9473, test_acc: 95.74%
Epoch 108 - loss: 0.9446, acc: 96.02% / test_loss: 0.9473, test_acc: 95.73%
Epoch 109 - loss: 0.9488, acc: 95.60% / test_loss: 0.9468, test_acc: 95.82%
Epoch 110 - loss: 0.9476, acc: 95.73% / test_loss: 0.9489, test_acc: 95.61%
Epoch 111 - loss: 0.9462, acc: 95.85% / test_loss: 0.9461, test_acc: 95.87%
Epoch 112 - loss: 0.9449, acc: 96.00% / test_loss: 0.9466, test_acc: 95.86%
Epoch 113 - loss: 0.9471, acc: 95.78% / test_loss: 0.9527, test_acc: 95.20%
Epoch 114 - loss: 0.9456, acc: 95.92% / test_loss: 0.9480, test_acc: 95.67%
Epoch 115 - loss: 0.9452, acc: 95.97% / test_loss: 0.9484, test_acc: 95.66%
Epoch 116 - loss: 0.9452, acc: 95.95% / test_loss: 0.9456, test_acc: 95.91%
Epoch 117 - loss: 0.9442, acc: 96.06% / test_loss: 0.9466, test_acc: 95.82%
Epoch 118 - loss: 0.9452, acc: 95.98% / test_loss: 0.9462, test_acc: 95.87%
Epoch 119 - loss: 0.9446, acc: 96.02% / test_loss: 0.9461, test_acc: 95.89%
Epoch 120 - loss: 0.9444, acc: 96.04% / test_loss: 0.9463, test_acc: 95.84%
Epoch 121 - loss: 0.9439, acc: 96.10% / test_loss: 0.9463, test_acc: 95.82%
Epoch 122 - loss: 0.9451, acc: 95.98% / test_loss: 0.9463, test_acc: 95.87%
Epoch 123 - loss: 0.9457, acc: 95.90% / test_loss: 0.9474, test_acc: 95.74%
Epoch 124 - loss: 0.9446, acc: 96.03% / test_loss: 0.9469, test_acc: 95.79%
Epoch 125 - loss: 0.9465, acc: 95.82% / test_loss: 0.9458, test_acc: 95.88%
Epoch 126 - loss: 0.9441, acc: 96.08% / test_loss: 0.9458, test_acc: 95.92%
Epoch 127 - loss: 0.9441, acc: 96.07% / test_loss: 0.9459, test_acc: 95.89%
Epoch 128 - loss: 0.9463, acc: 95.86% / test_loss: 0.9540, test_acc: 95.08%
Epoch 129 - loss: 0.9464, acc: 95.85% / test_loss: 0.9472, test_acc: 95.74%
Epoch 130 - loss: 0.9461, acc: 95.87% / test_loss: 0.9458, test_acc: 95.91%
Epoch 131 - loss: 0.9448, acc: 95.99% / test_loss: 0.9492, test_acc: 95.56%
Epoch 132 - loss: 0.9434, acc: 96.11% / test_loss: 0.9438, test_acc: 96.08%
Epoch 133 - loss: 0.9427, acc: 96.23% / test_loss: 0.9444, test_acc: 96.05%
Epoch 134 - loss: 0.9417, acc: 96.31% / test_loss: 0.9428, test_acc: 96.21%
Epoch 135 - loss: 0.9402, acc: 96.46% / test_loss: 0.9405, test_acc: 96.42%
Epoch 136 - loss: 0.9370, acc: 96.77% / test_loss: 0.9408, test_acc: 96.38%
Epoch 137 - loss: 0.9366, acc: 96.82% / test_loss: 0.9386, test_acc: 96.65%
Epoch 138 - loss: 0.9343, acc: 97.06% / test_loss: 0.9376, test_acc: 96.72%
Epoch 139 - loss: 0.9329, acc: 97.19% / test_loss: 0.9366, test_acc: 96.83%
Epoch 140 - loss: 0.9326, acc: 97.21% / test_loss: 0.9364, test_acc: 96.87%
Epoch 141 - loss: 0.9312, acc: 97.37% / test_loss: 0.9392, test_acc: 96.57%
Epoch 142 - loss: 0.9298, acc: 97.51% / test_loss: 0.9346, test_acc: 97.04%
Epoch 143 - loss: 0.9309, acc: 97.38% / test_loss: 0.9339, test_acc: 97.10%
Epoch 144 - loss: 0.9295, acc: 97.52% / test_loss: 0.9346, test_acc: 97.04%
Epoch 145 - loss: 0.9295, acc: 97.52% / test_loss: 0.9330, test_acc: 97.17%
Epoch 146 - loss: 0.9266, acc: 97.83% / test_loss: 0.9345, test_acc: 97.02%
Epoch 147 - loss: 0.9265, acc: 97.86% / test_loss: 0.9324, test_acc: 97.24%
Epoch 148 - loss: 0.9290, acc: 97.58% / test_loss: 0.9328, test_acc: 97.19%
Epoch 149 - loss: 0.9267, acc: 97.81% / test_loss: 0.9331, test_acc: 97.13%
Epoch 150 - loss: 0.9266, acc: 97.82% / test_loss: 0.9309, test_acc: 97.37%
Epoch 151 - loss: 0.9265, acc: 97.86% / test_loss: 0.9319, test_acc: 97.25%
Epoch 152 - loss: 0.9260, acc: 97.87% / test_loss: 0.9321, test_acc: 97.26%
Epoch 153 - loss: 0.9276, acc: 97.70% / test_loss: 0.9314, test_acc: 97.32%
Epoch 154 - loss: 0.9261, acc: 97.89% / test_loss: 0.9314, test_acc: 97.34%
Epoch 155 - loss: 0.9255, acc: 97.91% / test_loss: 0.9309, test_acc: 97.39%
Epoch 156 - loss: 0.9247, acc: 98.00% / test_loss: 0.9322, test_acc: 97.23%
Epoch 157 - loss: 0.9253, acc: 97.96% / test_loss: 0.9318, test_acc: 97.34%
Epoch 158 - loss: 0.9265, acc: 97.82% / test_loss: 0.9306, test_acc: 97.40%
Epoch 159 - loss: 0.9243, acc: 98.06% / test_loss: 0.9305, test_acc: 97.46%
Epoch 160 - loss: 0.9247, acc: 98.03% / test_loss: 0.9303, test_acc: 97.45%
Epoch 161 - loss: 0.9239, acc: 98.09% / test_loss: 0.9303, test_acc: 97.43%
Epoch 162 - loss: 0.9251, acc: 97.96% / test_loss: 0.9301, test_acc: 97.49%
Epoch 163 - loss: 0.9253, acc: 97.96% / test_loss: 0.9321, test_acc: 97.25%
Epoch 164 - loss: 0.9247, acc: 98.03% / test_loss: 0.9309, test_acc: 97.39%
Epoch 165 - loss: 0.9241, acc: 98.07% / test_loss: 0.9296, test_acc: 97.51%
Epoch 166 - loss: 0.9239, acc: 98.09% / test_loss: 0.9307, test_acc: 97.41%
Epoch 167 - loss: 0.9247, acc: 98.02% / test_loss: 0.9311, test_acc: 97.37%
Epoch 168 - loss: 0.9235, acc: 98.14% / test_loss: 0.9295, test_acc: 97.53%
Epoch 169 - loss: 0.9245, acc: 98.03% / test_loss: 0.9314, test_acc: 97.31%
Epoch 170 - loss: 0.9252, acc: 97.96% / test_loss: 0.9324, test_acc: 97.27%
Epoch 171 - loss: 0.9233, acc: 98.15% / test_loss: 0.9303, test_acc: 97.46%
Epoch 172 - loss: 0.9230, acc: 98.17% / test_loss: 0.9303, test_acc: 97.46%
Epoch 173 - loss: 0.9224, acc: 98.23% / test_loss: 0.9296, test_acc: 97.51%
Epoch 174 - loss: 0.9228, acc: 98.18% / test_loss: 0.9314, test_acc: 97.30%
Epoch 175 - loss: 0.9236, acc: 98.10% / test_loss: 0.9302, test_acc: 97.44%
Epoch 176 - loss: 0.9227, acc: 98.23% / test_loss: 0.9334, test_acc: 97.12%
Epoch 177 - loss: 0.9224, acc: 98.25% / test_loss: 0.9311, test_acc: 97.35%
Epoch 178 - loss: 0.9228, acc: 98.20% / test_loss: 0.9296, test_acc: 97.50%
Epoch 179 - loss: 0.9217, acc: 98.32% / test_loss: 0.9319, test_acc: 97.27%
Epoch 180 - loss: 0.9228, acc: 98.20% / test_loss: 0.9303, test_acc: 97.49%
Epoch 181 - loss: 0.9234, acc: 98.14% / test_loss: 0.9321, test_acc: 97.27%
Epoch 182 - loss: 0.9224, acc: 98.26% / test_loss: 0.9279, test_acc: 97.69%
Epoch 183 - loss: 0.9215, acc: 98.33% / test_loss: 0.9311, test_acc: 97.37%
Epoch 184 - loss: 0.9213, acc: 98.35% / test_loss: 0.9284, test_acc: 97.64%
Epoch 185 - loss: 0.9202, acc: 98.46% / test_loss: 0.9279, test_acc: 97.67%
Epoch 186 - loss: 0.9206, acc: 98.41% / test_loss: 0.9340, test_acc: 97.06%
Epoch 187 - loss: 0.9213, acc: 98.36% / test_loss: 0.9264, test_acc: 97.85%
Epoch 188 - loss: 0.9204, acc: 98.46% / test_loss: 0.9269, test_acc: 97.79%
Epoch 189 - loss: 0.9208, acc: 98.44% / test_loss: 0.9263, test_acc: 97.86%
Epoch 190 - loss: 0.9193, acc: 98.58% / test_loss: 0.9259, test_acc: 97.94%
Epoch 191 - loss: 0.9199, acc: 98.50% / test_loss: 0.9297, test_acc: 97.52%
Epoch 192 - loss: 0.9196, acc: 98.54% / test_loss: 0.9260, test_acc: 97.88%
Epoch 193 - loss: 0.9186, acc: 98.63% / test_loss: 0.9251, test_acc: 97.95%
Epoch 194 - loss: 0.9191, acc: 98.55% / test_loss: 0.9259, test_acc: 97.90%
Epoch 195 - loss: 0.9194, acc: 98.54% / test_loss: 0.9273, test_acc: 97.74%
Epoch 196 - loss: 0.9189, acc: 98.61% / test_loss: 0.9266, test_acc: 97.77%
Epoch 197 - loss: 0.9189, acc: 98.59% / test_loss: 0.9270, test_acc: 97.81%
Epoch 198 - loss: 0.9188, acc: 98.61% / test_loss: 0.9240, test_acc: 98.08%
Epoch 199 - loss: 0.9183, acc: 98.65% / test_loss: 0.9239, test_acc: 98.11%
Epoch 200 - loss: 0.9175, acc: 98.75% / test_loss: 0.9244, test_acc: 98.01%
Epoch 201 - loss: 0.9202, acc: 98.44% / test_loss: 0.9242, test_acc: 98.06%
Epoch 202 - loss: 0.9176, acc: 98.75% / test_loss: 0.9251, test_acc: 97.96%
Epoch 203 - loss: 0.9173, acc: 98.75% / test_loss: 0.9256, test_acc: 97.94%
Epoch 204 - loss: 0.9170, acc: 98.78% / test_loss: 0.9227, test_acc: 98.17%
Epoch 205 - loss: 0.9170, acc: 98.78% / test_loss: 0.9232, test_acc: 98.14%
Epoch 206 - loss: 0.9173, acc: 98.78% / test_loss: 0.9237, test_acc: 98.09%
Epoch 207 - loss: 0.9184, acc: 98.66% / test_loss: 0.9234, test_acc: 98.14%
Epoch 208 - loss: 0.9195, acc: 98.55% / test_loss: 0.9238, test_acc: 98.08%
Epoch 209 - loss: 0.9183, acc: 98.65% / test_loss: 0.9285, test_acc: 97.62%
Epoch 210 - loss: 0.9175, acc: 98.78% / test_loss: 0.9253, test_acc: 97.96%
Epoch 211 - loss: 0.9158, acc: 98.92% / test_loss: 0.9252, test_acc: 97.95%
Epoch 212 - loss: 0.9166, acc: 98.82% / test_loss: 0.9251, test_acc: 97.98%
Epoch 213 - loss: 0.9173, acc: 98.76% / test_loss: 0.9238, test_acc: 98.07%
Epoch 214 - loss: 0.9180, acc: 98.65% / test_loss: 0.9226, test_acc: 98.24%
Epoch 215 - loss: 0.9167, acc: 98.82% / test_loss: 0.9258, test_acc: 97.89%
Epoch 216 - loss: 0.9178, acc: 98.72% / test_loss: 0.9242, test_acc: 98.04%
Epoch 217 - loss: 0.9161, acc: 98.88% / test_loss: 0.9241, test_acc: 98.07%
Epoch 218 - loss: 0.9153, acc: 98.96% / test_loss: 0.9225, test_acc: 98.28%
Epoch 219 - loss: 0.9162, acc: 98.85% / test_loss: 0.9223, test_acc: 98.26%
Epoch 220 - loss: 0.9171, acc: 98.80% / test_loss: 0.9239, test_acc: 98.09%
Epoch 221 - loss: 0.9169, acc: 98.81% / test_loss: 0.9243, test_acc: 98.05%
Epoch 222 - loss: 0.9164, acc: 98.85% / test_loss: 0.9225, test_acc: 98.21%
Epoch 223 - loss: 0.9156, acc: 98.91% / test_loss: 0.9232, test_acc: 98.10%
Epoch 224 - loss: 0.9152, acc: 98.97% / test_loss: 0.9243, test_acc: 98.07%
Epoch 225 - loss: 0.9154, acc: 98.95% / test_loss: 0.9226, test_acc: 98.21%
Epoch 226 - loss: 0.9158, acc: 98.91% / test_loss: 0.9276, test_acc: 97.71%
Epoch 227 - loss: 0.9196, acc: 98.53% / test_loss: 0.9239, test_acc: 98.14%
Epoch 228 - loss: 0.9184, acc: 98.66% / test_loss: 0.9245, test_acc: 98.05%
Epoch 229 - loss: 0.9171, acc: 98.81% / test_loss: 0.9224, test_acc: 98.28%
Epoch 230 - loss: 0.9153, acc: 98.97% / test_loss: 0.9228, test_acc: 98.19%
Epoch 231 - loss: 0.9149, acc: 99.01% / test_loss: 0.9227, test_acc: 98.16%
Epoch 232 - loss: 0.9156, acc: 98.94% / test_loss: 0.9254, test_acc: 97.94%
Epoch 233 - loss: 0.9155, acc: 98.96% / test_loss: 0.9224, test_acc: 98.23%
Epoch 234 - loss: 0.9175, acc: 98.75% / test_loss: 0.9245, test_acc: 98.05%
Epoch 235 - loss: 0.9159, acc: 98.91% / test_loss: 0.9217, test_acc: 98.30%
Epoch 236 - loss: 0.9150, acc: 98.96% / test_loss: 0.9238, test_acc: 98.10%
Epoch 237 - loss: 0.9158, acc: 98.91% / test_loss: 0.9217, test_acc: 98.35%
Epoch 238 - loss: 0.9153, acc: 98.98% / test_loss: 0.9245, test_acc: 98.04%
Epoch 239 - loss: 0.9147, acc: 99.03% / test_loss: 0.9215, test_acc: 98.32%
Epoch 240 - loss: 0.9153, acc: 98.93% / test_loss: 0.9226, test_acc: 98.20%
Epoch 241 - loss: 0.9152, acc: 98.98% / test_loss: 0.9250, test_acc: 97.98%
Epoch 242 - loss: 0.9161, acc: 98.85% / test_loss: 0.9235, test_acc: 98.14%
Epoch 243 - loss: 0.9155, acc: 98.94% / test_loss: 0.9223, test_acc: 98.23%
Epoch 244 - loss: 0.9156, acc: 98.92% / test_loss: 0.9221, test_acc: 98.26%
Epoch 245 - loss: 0.9150, acc: 99.00% / test_loss: 0.9221, test_acc: 98.32%
Epoch 246 - loss: 0.9159, acc: 98.91% / test_loss: 0.9209, test_acc: 98.39%
Epoch 247 - loss: 0.9144, acc: 99.05% / test_loss: 0.9214, test_acc: 98.36%
Epoch 248 - loss: 0.9154, acc: 98.93% / test_loss: 0.9226, test_acc: 98.23%
Epoch 249 - loss: 0.9151, acc: 98.98% / test_loss: 0.9216, test_acc: 98.32%
Epoch 250 - loss: 0.9147, acc: 99.01% / test_loss: 0.9226, test_acc: 98.25%
Epoch 251 - loss: 0.9149, acc: 98.97% / test_loss: 0.9261, test_acc: 97.88%
Epoch 252 - loss: 0.9151, acc: 98.97% / test_loss: 0.9230, test_acc: 98.17%
Epoch 253 - loss: 0.9162, acc: 98.88% / test_loss: 0.9220, test_acc: 98.25%
Epoch 254 - loss: 0.9173, acc: 98.75% / test_loss: 0.9228, test_acc: 98.23%
Epoch 255 - loss: 0.9155, acc: 98.94% / test_loss: 0.9217, test_acc: 98.32%
Epoch 256 - loss: 0.9153, acc: 98.97% / test_loss: 0.9225, test_acc: 98.25%
Epoch 257 - loss: 0.9154, acc: 98.96% / test_loss: 0.9230, test_acc: 98.17%
Epoch 258 - loss: 0.9145, acc: 99.03% / test_loss: 0.9239, test_acc: 98.09%
Epoch 259 - loss: 0.9147, acc: 99.01% / test_loss: 0.9254, test_acc: 97.97%
Epoch 260 - loss: 0.9157, acc: 98.91% / test_loss: 0.9229, test_acc: 98.20%
Epoch 261 - loss: 0.9164, acc: 98.84% / test_loss: 0.9222, test_acc: 98.29%
Epoch 262 - loss: 0.9142, acc: 99.05% / test_loss: 0.9217, test_acc: 98.30%
Epoch 263 - loss: 0.9145, acc: 99.04% / test_loss: 0.9232, test_acc: 98.15%
Epoch 264 - loss: 0.9151, acc: 98.97% / test_loss: 0.9231, test_acc: 98.16%
Epoch 265 - loss: 0.9144, acc: 99.06% / test_loss: 0.9214, test_acc: 98.35%
Epoch 266 - loss: 0.9121, acc: 99.31% / test_loss: 0.9210, test_acc: 98.42%
Epoch 267 - loss: 0.9112, acc: 99.38% / test_loss: 0.9179, test_acc: 98.70%
Epoch 268 - loss: 0.9120, acc: 99.28% / test_loss: 0.9184, test_acc: 98.63%
Epoch 269 - loss: 0.9111, acc: 99.38% / test_loss: 0.9184, test_acc: 98.65%
Epoch 270 - loss: 0.9105, acc: 99.45% / test_loss: 0.9176, test_acc: 98.69%
Epoch 271 - loss: 0.9117, acc: 99.32% / test_loss: 0.9238, test_acc: 98.10%
Epoch 272 - loss: 0.9122, acc: 99.27% / test_loss: 0.9194, test_acc: 98.57%
Epoch 273 - loss: 0.9123, acc: 99.25% / test_loss: 0.9173, test_acc: 98.77%
Epoch 274 - loss: 0.9115, acc: 99.34% / test_loss: 0.9185, test_acc: 98.62%
Epoch 275 - loss: 0.9115, acc: 99.34% / test_loss: 0.9202, test_acc: 98.47%
Epoch 276 - loss: 0.9108, acc: 99.43% / test_loss: 0.9182, test_acc: 98.66%
Epoch 277 - loss: 0.9110, acc: 99.38% / test_loss: 0.9178, test_acc: 98.71%
Epoch 278 - loss: 0.9109, acc: 99.41% / test_loss: 0.9188, test_acc: 98.62%
Epoch 279 - loss: 0.9117, acc: 99.33% / test_loss: 0.9187, test_acc: 98.59%
Epoch 280 - loss: 0.9112, acc: 99.38% / test_loss: 0.9178, test_acc: 98.72%
Epoch 281 - loss: 0.9134, acc: 99.17% / test_loss: 0.9195, test_acc: 98.51%
Epoch 282 - loss: 0.9112, acc: 99.37% / test_loss: 0.9193, test_acc: 98.55%
Epoch 283 - loss: 0.9114, acc: 99.37% / test_loss: 0.9193, test_acc: 98.56%
Epoch 284 - loss: 0.9113, acc: 99.35% / test_loss: 0.9181, test_acc: 98.68%
Epoch 285 - loss: 0.9109, acc: 99.40% / test_loss: 0.9175, test_acc: 98.73%
Epoch 286 - loss: 0.9104, acc: 99.45% / test_loss: 0.9178, test_acc: 98.67%
Epoch 287 - loss: 0.9114, acc: 99.34% / test_loss: 0.9201, test_acc: 98.46%
Epoch 288 - loss: 0.9128, acc: 99.19% / test_loss: 0.9169, test_acc: 98.80%
Epoch 289 - loss: 0.9110, acc: 99.39% / test_loss: 0.9185, test_acc: 98.62%
Epoch 290 - loss: 0.9104, acc: 99.45% / test_loss: 0.9183, test_acc: 98.66%
Epoch 291 - loss: 0.9106, acc: 99.42% / test_loss: 0.9183, test_acc: 98.63%
Epoch 292 - loss: 0.9117, acc: 99.31% / test_loss: 0.9190, test_acc: 98.57%
Epoch 293 - loss: 0.9116, acc: 99.32% / test_loss: 0.9233, test_acc: 98.15%
Epoch 294 - loss: 0.9106, acc: 99.43% / test_loss: 0.9177, test_acc: 98.70%
Epoch 295 - loss: 0.9112, acc: 99.37% / test_loss: 0.9194, test_acc: 98.54%
Epoch 296 - loss: 0.9112, acc: 99.36% / test_loss: 0.9171, test_acc: 98.76%
Epoch 297 - loss: 0.9100, acc: 99.49% / test_loss: 0.9168, test_acc: 98.81%
Epoch 298 - loss: 0.9105, acc: 99.45% / test_loss: 0.9177, test_acc: 98.72%
Epoch 299 - loss: 0.9117, acc: 99.32% / test_loss: 0.9199, test_acc: 98.48%
Epoch 300 - loss: 0.9109, acc: 99.40% / test_loss: 0.9168, test_acc: 98.80%
Epoch 301 - loss: 0.9112, acc: 99.39% / test_loss: 0.9218, test_acc: 98.32%
Epoch 302 - loss: 0.9111, acc: 99.37% / test_loss: 0.9182, test_acc: 98.66%
Epoch 303 - loss: 0.9115, acc: 99.33% / test_loss: 0.9224, test_acc: 98.27%
Epoch 304 - loss: 0.9108, acc: 99.41% / test_loss: 0.9206, test_acc: 98.43%
Epoch 305 - loss: 0.9116, acc: 99.32% / test_loss: 0.9189, test_acc: 98.60%
Epoch 306 - loss: 0.9117, acc: 99.30% / test_loss: 0.9171, test_acc: 98.78%
Epoch 307 - loss: 0.9101, acc: 99.49% / test_loss: 0.9180, test_acc: 98.68%
Epoch 308 - loss: 0.9110, acc: 99.37% / test_loss: 0.9171, test_acc: 98.78%
Epoch 309 - loss: 0.9108, acc: 99.39% / test_loss: 0.9236, test_acc: 98.13%
Epoch 310 - loss: 0.9129, acc: 99.16% / test_loss: 0.9182, test_acc: 98.63%
Epoch 311 - loss: 0.9102, acc: 99.46% / test_loss: 0.9191, test_acc: 98.60%
Epoch 312 - loss: 0.9105, acc: 99.43% / test_loss: 0.9188, test_acc: 98.60%
Epoch 313 - loss: 0.9110, acc: 99.41% / test_loss: 0.9188, test_acc: 98.61%
Epoch 314 - loss: 0.9108, acc: 99.41% / test_loss: 0.9173, test_acc: 98.73%
Epoch 315 - loss: 0.9113, acc: 99.34% / test_loss: 0.9182, test_acc: 98.66%
Epoch 316 - loss: 0.9135, acc: 99.12% / test_loss: 0.9192, test_acc: 98.55%
Epoch 317 - loss: 0.9112, acc: 99.37% / test_loss: 0.9190, test_acc: 98.60%
Epoch 318 - loss: 0.9110, acc: 99.39% / test_loss: 0.9205, test_acc: 98.44%
Epoch 319 - loss: 0.9105, acc: 99.43% / test_loss: 0.9169, test_acc: 98.76%
Epoch 320 - loss: 0.9114, acc: 99.35% / test_loss: 0.9180, test_acc: 98.65%
Epoch 321 - loss: 0.9113, acc: 99.36% / test_loss: 0.9178, test_acc: 98.69%
Epoch 322 - loss: 0.9104, acc: 99.44% / test_loss: 0.9184, test_acc: 98.63%
Epoch 323 - loss: 0.9104, acc: 99.44% / test_loss: 0.9226, test_acc: 98.24%
Epoch 324 - loss: 0.9105, acc: 99.43% / test_loss: 0.9178, test_acc: 98.66%
Epoch 325 - loss: 0.9102, acc: 99.47% / test_loss: 0.9187, test_acc: 98.63%
Epoch 326 - loss: 0.9115, acc: 99.34% / test_loss: 0.9188, test_acc: 98.61%
Epoch 327 - loss: 0.9111, acc: 99.38% / test_loss: 0.9188, test_acc: 98.63%
Epoch 328 - loss: 0.9103, acc: 99.45% / test_loss: 0.9204, test_acc: 98.41%
Epoch 329 - loss: 0.9113, acc: 99.35% / test_loss: 0.9194, test_acc: 98.56%
Epoch 330 - loss: 0.9108, acc: 99.40% / test_loss: 0.9205, test_acc: 98.44%
Epoch 331 - loss: 0.9128, acc: 99.20% / test_loss: 0.9180, test_acc: 98.64%
Epoch 332 - loss: 0.9116, acc: 99.31% / test_loss: 0.9179, test_acc: 98.72%
Epoch 333 - loss: 0.9108, acc: 99.41% / test_loss: 0.9177, test_acc: 98.68%
Epoch 334 - loss: 0.9107, acc: 99.42% / test_loss: 0.9176, test_acc: 98.72%
Epoch 335 - loss: 0.9106, acc: 99.43% / test_loss: 0.9160, test_acc: 98.88%
Epoch 336 - loss: 0.9110, acc: 99.39% / test_loss: 0.9181, test_acc: 98.66%
Epoch 337 - loss: 0.9116, acc: 99.31% / test_loss: 0.9174, test_acc: 98.74%
Epoch 338 - loss: 0.9123, acc: 99.25% / test_loss: 0.9255, test_acc: 97.92%
Epoch 339 - loss: 0.9104, acc: 99.46% / test_loss: 0.9177, test_acc: 98.75%
Epoch 340 - loss: 0.9107, acc: 99.41% / test_loss: 0.9178, test_acc: 98.69%
Epoch 341 - loss: 0.9100, acc: 99.48% / test_loss: 0.9164, test_acc: 98.83%
Epoch 342 - loss: 0.9105, acc: 99.42% / test_loss: 0.9192, test_acc: 98.54%
Epoch 343 - loss: 0.9110, acc: 99.38% / test_loss: 0.9201, test_acc: 98.45%
Epoch 344 - loss: 0.9121, acc: 99.27% / test_loss: 0.9161, test_acc: 98.88%
Epoch 345 - loss: 0.9109, acc: 99.39% / test_loss: 0.9176, test_acc: 98.69%
Epoch 346 - loss: 0.9102, acc: 99.48% / test_loss: 0.9170, test_acc: 98.78%
Epoch 347 - loss: 0.9105, acc: 99.43% / test_loss: 0.9179, test_acc: 98.64%
Epoch 348 - loss: 0.9102, acc: 99.47% / test_loss: 0.9280, test_acc: 97.67%
Epoch 349 - loss: 0.9106, acc: 99.44% / test_loss: 0.9169, test_acc: 98.78%
Epoch 350 - loss: 0.9109, acc: 99.40% / test_loss: 0.9206, test_acc: 98.42%
Epoch 351 - loss: 0.9125, acc: 99.21% / test_loss: 0.9183, test_acc: 98.63%
Epoch 352 - loss: 0.9102, acc: 99.46% / test_loss: 0.9174, test_acc: 98.73%
Epoch 353 - loss: 0.9097, acc: 99.52% / test_loss: 0.9180, test_acc: 98.68%
Epoch 354 - loss: 0.9097, acc: 99.52% / test_loss: 0.9166, test_acc: 98.80%
Epoch 355 - loss: 0.9094, acc: 99.54% / test_loss: 0.9169, test_acc: 98.80%
Epoch 356 - loss: 0.9094, acc: 99.55% / test_loss: 0.9167, test_acc: 98.81%
Epoch 357 - loss: 0.9094, acc: 99.55% / test_loss: 0.9167, test_acc: 98.81%
Epoch 358 - loss: 0.9118, acc: 99.30% / test_loss: 0.9208, test_acc: 98.41%
Epoch 359 - loss: 0.9112, acc: 99.36% / test_loss: 0.9207, test_acc: 98.40%
Epoch 360 - loss: 0.9109, acc: 99.40% / test_loss: 0.9179, test_acc: 98.69%
Epoch 361 - loss: 0.9105, acc: 99.43% / test_loss: 0.9168, test_acc: 98.78%
Epoch 362 - loss: 0.9098, acc: 99.52% / test_loss: 0.9174, test_acc: 98.72%
Epoch 363 - loss: 0.9098, acc: 99.50% / test_loss: 0.9200, test_acc: 98.47%
Epoch 364 - loss: 0.9097, acc: 99.51% / test_loss: 0.9162, test_acc: 98.86%
Epoch 365 - loss: 0.9094, acc: 99.55% / test_loss: 0.9185, test_acc: 98.61%
Epoch 366 - loss: 0.9096, acc: 99.53% / test_loss: 0.9188, test_acc: 98.60%
Epoch 367 - loss: 0.9109, acc: 99.40% / test_loss: 0.9180, test_acc: 98.68%
Epoch 368 - loss: 0.9125, acc: 99.22% / test_loss: 0.9182, test_acc: 98.69%
Epoch 369 - loss: 0.9115, acc: 99.35% / test_loss: 0.9171, test_acc: 98.76%
Epoch 370 - loss: 0.9107, acc: 99.42% / test_loss: 0.9181, test_acc: 98.68%
Epoch 371 - loss: 0.9104, acc: 99.44% / test_loss: 0.9186, test_acc: 98.60%
Epoch 372 - loss: 0.9107, acc: 99.42% / test_loss: 0.9178, test_acc: 98.69%
Epoch 373 - loss: 0.9108, acc: 99.39% / test_loss: 0.9181, test_acc: 98.66%
Epoch 374 - loss: 0.9101, acc: 99.48% / test_loss: 0.9176, test_acc: 98.72%
Epoch 375 - loss: 0.9101, acc: 99.48% / test_loss: 0.9170, test_acc: 98.78%
Epoch 376 - loss: 0.9096, acc: 99.52% / test_loss: 0.9169, test_acc: 98.80%
Epoch 377 - loss: 0.9130, acc: 99.20% / test_loss: 0.9189, test_acc: 98.55%
Epoch 378 - loss: 0.9109, acc: 99.40% / test_loss: 0.9213, test_acc: 98.35%
Epoch 379 - loss: 0.9106, acc: 99.42% / test_loss: 0.9174, test_acc: 98.76%
Epoch 380 - loss: 0.9121, acc: 99.28% / test_loss: 0.9179, test_acc: 98.72%
Epoch 381 - loss: 0.9103, acc: 99.46% / test_loss: 0.9173, test_acc: 98.74%
Epoch 382 - loss: 0.9099, acc: 99.49% / test_loss: 0.9181, test_acc: 98.67%
Epoch 383 - loss: 0.9107, acc: 99.42% / test_loss: 0.9174, test_acc: 98.74%
Epoch 384 - loss: 0.9112, acc: 99.37% / test_loss: 0.9175, test_acc: 98.73%
Epoch 385 - loss: 0.9098, acc: 99.49% / test_loss: 0.9169, test_acc: 98.78%
Epoch 386 - loss: 0.9103, acc: 99.46% / test_loss: 0.9172, test_acc: 98.75%
Epoch 387 - loss: 0.9121, acc: 99.27% / test_loss: 0.9180, test_acc: 98.68%
Epoch 388 - loss: 0.9109, acc: 99.40% / test_loss: 0.9204, test_acc: 98.42%
Epoch 389 - loss: 0.9102, acc: 99.49% / test_loss: 0.9172, test_acc: 98.76%
Epoch 390 - loss: 0.9098, acc: 99.51% / test_loss: 0.9178, test_acc: 98.71%
Epoch 391 - loss: 0.9107, acc: 99.41% / test_loss: 0.9197, test_acc: 98.51%
Epoch 392 - loss: 0.9098, acc: 99.50% / test_loss: 0.9175, test_acc: 98.75%
Epoch 393 - loss: 0.9101, acc: 99.46% / test_loss: 0.9178, test_acc: 98.67%
Epoch 394 - loss: 0.9114, acc: 99.35% / test_loss: 0.9225, test_acc: 98.23%
Epoch 395 - loss: 0.9107, acc: 99.41% / test_loss: 0.9183, test_acc: 98.63%
Epoch 396 - loss: 0.9104, acc: 99.45% / test_loss: 0.9177, test_acc: 98.68%
Epoch 397 - loss: 0.9100, acc: 99.49% / test_loss: 0.9169, test_acc: 98.80%
Epoch 398 - loss: 0.9099, acc: 99.49% / test_loss: 0.9179, test_acc: 98.69%
Epoch 399 - loss: 0.9100, acc: 99.49% / test_loss: 0.9175, test_acc: 98.72%
Epoch 400 - loss: 0.9106, acc: 99.42% / test_loss: 0.9170, test_acc: 98.76%
Best test accuracy 98.88% in epoch 335.
----------------------------------------------------------------------------------------------------
###Markdown
Print the best test accuracy of each run
###Code
for i, a in enumerate(best_test_accs):
print('Run {}: {:.2f}%'.format(i+1, a*100))
###Output
Run 1: 98.46%
Run 2: 98.47%
Run 3: 98.41%
Run 4: 98.81%
Run 5: 98.37%
Run 6: 98.37%
Run 7: 98.51%
Run 8: 98.42%
Run 9: 98.50%
Run 10: 98.88%
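###Markdown
A compact summary across runs (an addition, not part of the original notebook; it assumes `best_test_accs` is the per-run list printed above):
###Code
import numpy as np
accs = np.array(best_test_accs) * 100
print('Mean best test accuracy: {:.2f}% (std {:.2f}%) over {} runs'.format(accs.mean(), accs.std(), len(accs)))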
|
Data science/Analise de dados/exercicio python/Exemplo Jupyter/Untitled.ipynb | ###Markdown
Data analysis project
###Code
import pandas as pd
titanic = pd.read_csv("titanic.csv", sep="\t")
titanic.head()
titanic
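# Extra example (an addition, not in the original notebook): quick numeric summary of the columns
titanic.describe()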
###Output
_____no_output_____ |
Lesson 08 - Linear Regression II.ipynb | ###Markdown
Lesson 08 - Linear Regression II (Austin Derrow-Pinion)
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
%matplotlib inline
df = pd.read_csv('./Data/wine_quality_white.csv', sep=';')
df.head()
df.describe()
# 11 features, 1 to predict. 4898 samples
df.shape
# store only the feature values
features = df.values[:,:11]
features.shape
# store only the quality values
quality = df.quality.values.astype(np.float64)
quality.shape
# scale all values
features_s = scale(features)
quality_s = scale(quality)
# Tensorflow
# y = wx + b
x = tf.constant(features_s, dtype='float32', shape=[4898, 11])
y = tf.constant(quality_s, dtype='float32', shape=[4898, 1])
# weights
w = tf.Variable(tf.truncated_normal([11, 1], stddev = 1))
# biases
b = tf.Variable(0.0)
# error loss function
MSE = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
# define optimizer
STEPSIZE = 0.1
optimizer = tf.train.GradientDescentOptimizer(STEPSIZE).minimize(MSE)
# prediction with current weights
y_pred = tf.matmul(x, w) + b
# init all variables
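# (note: later TF 1.x releases renamed this call to tf.global_variables_initializer())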
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
MAXSTEPS = 1000
for step in range(MAXSTEPS + 1):
(_, mse, w0, b0) = sess.run([optimizer, MSE, w, b])
if (step % 100) == 0:
print('step = %-5d MSE = %-10f' % (step, mse))
print('done!')
# make predictions
quality_s_pred = sess.run(y_pred)
# unstandardize the quality predictions
quality_pred = quality_s_pred * quality.std() + quality.mean()
# create new prediction column in dataframe df
df['quality_pred'] = quality_pred
df.head()
# compute mean square error
mse = ((quality - quality_pred.flatten()) ** 2).mean()
print('MSE = ', mse)
print('RMSE = ', np.sqrt(mse))
###Output
_____no_output_____
###Markdown
Check using sklearn
###Code
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(features, quality)
quality_pred0 = LR.predict(features)
# compute mean square error
mse = ((quality - quality_pred0.flatten()) ** 2).mean()
print('MSE = ', mse)
print('RMSE = ', np.sqrt(mse))
plt.scatter(quality, quality_pred, alpha=0.02)
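# Extra check (an addition, not in the original notebook): coefficient of
# determination of the TensorFlow fit against the true quality scores
from sklearn.metrics import r2_score
print('R^2 = ', r2_score(quality, quality_pred.flatten()))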
###Output
_____no_output_____ |
scientific_computation/sympy.ipynb | ###Markdown
[SymPy](http://www.sympy.org/en/index.html) A Python library for symbolic mathematics. Among other things, it provides:1. [Symbolic simplification](http://docs.sympy.org/latest/tutorial/simplification.html).2. [Calculus (derivatives, integrals, limits, and series expansions)](http://docs.sympy.org/latest/tutorial/calculus.html).3. [Algebraic solver](http://docs.sympy.org/latest/tutorial/solvers.html).4. [Matrix operations](http://docs.sympy.org/latest/tutorial/matrices.html).5. [Combinatorics](http://docs.sympy.org/latest/modules/combinatorics/index.html).6. [Cryptography](http://docs.sympy.org/latest/modules/crypto.html). Example
###Code
try:
from sympy import init_session
except:
!pip3 install sympy
from sympy import init_session
init_session(use_latex='matplotlib')
# https://github.com/AeroPython/Taller-Aeropython-PyConEs16
expr = cos(x)**2 + sin(x)**2
expr
simplify(expr)
expr.subs(x, y**2)
expr = (x + y) ** 2
expr
expr = expr.expand()
expr
expr = expr.factor()
expr
expr = expr.integrate(x)
expr
expr = expr.diff(x)
expr
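# Extra example (an addition, not part of the original notebook): the algebraic
# solver listed above; solve returns the roots of x**2 - 2 = 0, i.e. [-sqrt(2), sqrt(2)]
solve(x**2 - 2, x)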
###Output
_____no_output_____ |
notebooks/Demo6.ipynb | ###Markdown
Example 02: CIFAR-10 Demo
###Code
import sys
sys.path.append('./../')
import matplotlib
%matplotlib inline
import visualisation
###Output
_____no_output_____
###Markdown
(i) Train an ANT on the CIFAR-10 image recognition dataset. From the code directory, run the following command to train an ANT (the flag groups are, in order: dataset, experiment names, training, transformer module config, router module config, solver module config, other model config, and miscellaneous):```bash python tree.py --dataset cifar10 --experiment demo --subexperiment ant_cifar10 --batch-size 512 --epochs_patience 5 --epochs_node 100 --epochs_finetune 200 --scheduler step_lr --augmentation_on -t_ver 5 -t_k 3 -t_ngf 96 -r_ver 3 -r_ngf 48 -r_k 3 -s_ver 6 --maxdepth 10 --batch_norm --visualise_split --num_workers 0 --seed 0 ```It takes less than 3 hours on a single Titan X GPU. (ii) Plot classification accuracy. The dotted lines correspond to the epoch number at which the refinement phase started.
###Code
exp_dir = './../experiments/iot/demo6/'
models_list = ['ant_iot']
records_file_list = [exp_dir + model_name + '/checkpoints/records.json' for model_name in models_list]
model_files = [exp_dir + model_name + '/checkpoints/model.pth' for model_name in models_list]
visualisation.plot_performance(records_file_list, models_list, ymax = 25.0, figsize=(10,7), finetune_position=True)
visualisation.plot_accuracy(records_file_list, models_list, figsize=(10,7), ymin=0, ymax=105, finetune_position=True)
###Output
13
13
ant_iot: test accuracy = 55.8983666062
###Markdown
(iii) Compute model size
###Code
_ = visualisation.compute_number_of_params(model_files, models_list, is_gpu=False)
###Output
Model: ant_iot
Number of parameters summary:
Total: 87082
Max per branch: 87082
Min per branch: 87082
Average per branch: 87082.0
###Markdown
(iv) Visualise the tree structure
###Code
fig_dir = exp_dir + 'ant_cifar10' + '/figures/'
visualisation.visualise_treestructures(fig_dir, figsize=(10,20))
###Output
_____no_output_____ |
03 - Working with NumPy/notebooks/04-Setting-and-Slicing-Array-Elements.ipynb | ###Markdown
Setting and Slicing Array Elements
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Array Slicing
###Code
# Create an array
A = np.array([1, 2, 3, 4, 5, 6, 7, 8])
print(A)
###Output
[1 2 3 4 5 6 7 8]
###Markdown
Positive IndexingSelect the `nth` element of an array by using `var[position]`
###Code
# Select 0th element of `A`
A[0]
# Select 1st element of `A`
A[1]
###Output
_____no_output_____
###Markdown
Negative Indexing
###Code
# Select last element of `A`
A[-1]
A[-2]
###Output
_____no_output_____
###Markdown
Extract a Portion of a Sequence by specifying a lower and upper bound. The lower-bound element is `included`, but the upper-bound element is `not included`; mathematically, the range is [lower, upper). An optional step value specifies the stride between elements.
###Code
B = np.array([1, 2, 3, 4, 5, 6, 7])
B
# indices: B[i:j] selects elements i through j-1
B[1:3]
# negative indices work also
B[1:-2]
B[-4:3]
###Output
_____no_output_____
###Markdown
Omitted boundaries are assumed to be the beginning (or end) of the list
###Code
# grab the first three elements: the first n elements are B[:n]
B[:3]
B[3:]
# grab last two elements
B[-2:]
# grab every other element
B[::2]
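# A negative step walks the array backwards (extra example, not in the original): reverse B
B[::-1]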
###Output
_____no_output_____
###Markdown
2D Array Slicing
###Code
arr = np.array([[1, 2, 3], [4, 5, 6]])
print(arr)
# grab 1st row and 1st element
arr[0, 0]
# first row, last element (index 2)
arr[0, 2]
# negative indexing
arr[0, -1]
# select all rows of the last column
arr[:, 2]
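# Ranges work on each axis too (extra example, not in the original): every row, columns 1 onward
arr[:, 1:]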
###Output
_____no_output_____ |
notebooks/eda_chexpert.ipynb | ###Markdown
Take a Quick Look at the Data Structure
###Code
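# Assumed preamble (an addition, not shown in this excerpt): the CheXpert train/valid
# CSVs are loaded into train_df / test_df in an earlier cell, and the imports below
# are used throughout this notebook.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns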
train_df.head()
train_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 223414 entries, 0 to 223413
Data columns (total 19 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Path 223414 non-null object
1 Sex 223414 non-null object
2 Age 223414 non-null int64
3 Frontal/Lateral 223414 non-null object
4 AP/PA 191027 non-null object
5 No Finding 22381 non-null float64
6 Enlarged Cardiomediastinum 44839 non-null float64
7 Cardiomegaly 46203 non-null float64
8 Lung Opacity 117778 non-null float64
9 Lung Lesion 11944 non-null float64
10 Edema 85956 non-null float64
11 Consolidation 70622 non-null float64
12 Pneumonia 27608 non-null float64
13 Atelectasis 68443 non-null float64
14 Pneumothorax 78934 non-null float64
15 Pleural Effusion 133211 non-null float64
16 Pleural Other 6492 non-null float64
17 Fracture 12194 non-null float64
18 Support Devices 123217 non-null float64
dtypes: float64(14), int64(1), object(4)
memory usage: 32.4+ MB
###Markdown
Check non-numeric column values
###Code
for (column, t) in zip(train_df.columns, train_df.dtypes):
if column != 'Path' and t == object:
print(column)
a = train_df[column].unique()
print(a)
for fl in ['Frontal', 'Lateral']:
print(fl)
print(train_df[train_df['Frontal/Lateral']==fl]['AP/PA'].unique())
def encode_categorial_features(df):
df['Frontal/Lateral'] = df['Frontal/Lateral'].map({'Frontal': 1, 'Lateral': 0})
df['Sex'] = df['Sex'].map({'Male': 1, 'Female': 0, 'Unknown': np.nan})
df['AP/PA'] = df['AP/PA'].map({'AP': 1, 'PA': 2, 'LL': 3, 'RL':4})
return df
train_df = encode_categorial_features(train_df)
test_df = encode_categorial_features(test_df)
train_df.describe()
train_df['Age'].hist()
###Output
_____no_output_____
###Markdown
Looking for Correlations
###Code
obs_columns = train_df.columns[5::]
obs_columns
corr = train_df.corr()
# drop No Finding because it only has 1 calsses
corr = corr.drop("No Finding", axis=0).drop("No Finding", axis=1)
fig = plt.figure(figsize=(30, 15))
sns.set(font_scale=3)
xticklabels = [
'Sex', 'Age', 'Frontal/Lateral', 'AP/PA', 'Enlarged Card', 'Cardiomegaly',
'Lung Opacity', 'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia',
'Atelectasis', 'Pneumothorax', 'Pleural Effusion', 'Pleural Other',
'Fracture', 'Support Devices'
]
ax = sns.heatmap(corr.iloc[4:, :],
xticklabels=xticklabels,
vmin=-1,
vmax=1,
cmap='PuOr',
annot=True,
fmt=".2f",
annot_kws={"fontsize": "small"})
###Output
_____no_output_____
###Markdown
Class weights
###Code
obs_df = train_df[obs_columns]
value_count_df = obs_df.apply(lambda c: c.value_counts()).T
value_count_df
weight_df = value_count_df[1.0] / (value_count_df[1.0] + value_count_df[0])
weight_df.apply(lambda x: round(x, 2)).to_frame()
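# Extra illustration (an addition, not in the original notebook): the negative/positive
# count ratio per observation could serve as a per-class positive weight when training
# with an imbalance-aware loss.
pos_weight = value_count_df[0] / value_count_df[1.0]
pos_weight.apply(lambda x: round(x, 2)).to_frame()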
def calculate_classes_distribution(df, sort=True):
df_size = len(df)
classes = [('Positive(%)', 1.0), ("Uncertain(%)", -1.0), ('Negative(%)', 0)]
result = {}
for o in obs_columns:
result[o] = {}
for n, v in classes:
count = df[df[o]==v][o].count()
result[o][n] = (count, round(count/df_size * 100, 2))
result_df = pd.DataFrame(result)
if sort:
result_df = result_df.sort_values(by='Positive(%)', axis=1, ascending=False).applymap(lambda v: f"{v[0]}({v[1]})")
return result_df.T
print(len(train_df))
calculate_classes_distribution(train_df, sort=True)
print(len(test_df))
calculate_classes_distribution(test_df, sort=True)
###Output
234
|
solutions/03-combiner.ipynb | ###Markdown
Data analysis and visualization with Python: Combining DataFrames with Pandas. Questions:* Can we work with multiple data sources?* How do we combine the data of two DataFrames? Objectives:* Combine data from several files using `concat` and `merge`.* Combine two DataFrames using a common identifier. Loading our data
###Code
# Load the pandas module
import pandas as pd
# Load the data
valeurs = pd.read_csv("../data/valeurs.csv")
###Output
_____no_output_____
###Markdown
Concatenating DataFrames
###Code
# Select the first 10 records
premiers10 = valeurs.head(10)
premiers10
# Select the last 10 records
derniers10 = valeurs.tail(10)
derniers10
# Concatenate the dataframes vertically
liste_df = [premiers10, derniers10]
vertical = pd.concat(liste_df, axis=0)
vertical
# Reset the dataframe index
# The drop=True option avoids adding a column with the old index
vertical = vertical.reset_index(drop=True)
vertical
###Output
_____no_output_____
###Markdown
Write the result to a CSV file
###Code
fichier_csv = 'vertical.csv'
# Omit the index
vertical.to_csv(fichier_csv, index=False)
# Load the new CSV file
vertical2 = pd.read_csv(fichier_csv)
vertical2
###Output
_____no_output_____
###Markdown
Exercise - Concatenating DataFrames* In `valeurs`, individually select the records for the years 2016 and 2017.* Concatenate the two DataFrames vertically.* Create a `line` plot showing the mean `Market_Cap_USD` per month for each year (i.e. one line per year).* Save the table of means to a CSV file and reload it.
###Code
# Get the data for each year
valeurs2016 = valeurs[valeurs['year'] == 2016]
valeurs2017 = valeurs[valeurs['year'] == 2017]
# Concatenate vertically
liste_df = [valeurs2016, valeurs2017]
valeurs16_17 = pd.concat(liste_df, axis=0)
# Compute the mean per year for each month
moyennes16_17 = valeurs16_17.groupby(['month', 'year'])['Market_Cap_USD'].mean()
moyennes16_17 = moyennes16_17.unstack()
moyennes16_17
# Create the plot
moyennes16_17.plot(kind="line")
# Write to a file - keep the "month" index this time
moyennes16_17.to_csv("moyennes16_17.csv")
# Re-read the data, providing the index name
pd.read_csv("moyennes16_17.csv", index_col='month')
###Output
_____no_output_____
###Markdown
Joining two DataFrames. We will use the December 2015 records whose Market_Cap_USD is at least $250,000.
###Code
valeurs_250k = valeurs[(valeurs['year'] == 2015) &
(valeurs['month'] == 12) &
(valeurs['Market_Cap_USD'] >= 250000)]
# Keep only a few columns
valeurs_250k = valeurs_250k[['company_ID',
'Market_Cap_USD',
'Total_Return_USD',
'Assets_to_Equity',
'leverage_category']]
valeurs_250k
# Import the company information
compagnies = pd.read_csv('../data/compagnies.csv')
compagnies
###Output
_____no_output_____
###Markdown
Identify the join keys
###Code
# Display the columns
valeurs_250k.columns
# Display the other columns
compagnies.columns
###Output
_____no_output_____
###Markdown
An intersection, or "inner join" ![Inner join of tables A and B](https://datacarpentry.org/python-ecology-lesson/fig/inner-join.png)
###Code
# Perform the intersection of the companies and the selected values
cle = 'company_ID'
intersection = pd.merge(left=compagnies, right=valeurs_250k,
left_on=cle, right_on=cle)
# What is the size of the join?
intersection.shape
intersection
###Output
_____no_output_____
###Markdown
Left join ![Left join of tables A and B](https://datacarpentry.org/python-ecology-lesson/fig/left-join.png)
###Code
jonc_gauche = pd.merge(left=compagnies, right=valeurs_250k,
how='left', on=cle)
# What is the size of the join?
jonc_gauche.shape
jonc_gauche
###Output
_____no_output_____
###Markdown
The other join types* `how='right'` : all rows of the second DataFrame are kept* `how='outer'` : the equivalent of a union, all rows are kept Exercise - Joining all the data`1`. Create a new DataFrame such that every record of `valeurs.csv` has its company information at the start of the join.
###Code
valeurs_cie = pd.merge(left=compagnies, right=valeurs, how='right', on=cle)
valeurs_cie
###Output
_____no_output_____
###Markdown
`2`. Compute and create a chart (*bar-plot*) showing the mean of `Market_Cap_USD` for each `sector_GICS_name`.
###Code
par_sector = valeurs_cie.groupby('sector_GICS_name')
moyenne_cap = par_sector['Market_Cap_USD'].mean()
moyenne_cap.plot(kind='bar')
###Output
_____no_output_____
###Markdown
`3`. Compute the number of records by country and by "leverage" category. Create a bar-plot showing the number of records per country, with a different color per category.
###Code
colonnes = ['country', 'leverage_category']
par_pays_lev = valeurs_cie.groupby(colonnes)
decompte = par_pays_lev["record_id"].count().unstack()
decompte
decompte.plot(kind='bar', stacked=True, logy=True)
###Output
_____no_output_____ |
cleaning/01_cleaning_code_by_state/MO_cleaning.ipynb | ###Markdown
Read in federal level data
###Code
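# Assumed preamble (an addition, not shown in this excerpt): the imports used below,
# plus the state abbreviation `abbr` and the raw-data `file` name, are expected to be
# defined before this cell; only the imports are reproduced here.
import numpy as np
import pandas as pd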
fiscal = pd.read_sas('../../data/fiscal2018', format = 'sas7bdat', encoding='iso-8859-1')
###Output
_____no_output_____
###Markdown
Generate list of districts in the state in the federal data
###Code
state_fiscal = fiscal[(fiscal['STABBR'] == abbr) & (fiscal['GSHI'] == '12')]
len(state_fiscal)
state_fiscal.head()
###Output
_____no_output_____
###Markdown
Read in state level data
###Code
state_grads = pd.read_excel('../../data/state_data_raw/' + file)
state_grads
###Output
_____no_output_____
###Markdown
Filter results.
###Code
state_grads = state_grads[state_grads['END_GRADE'] == 12]
###Output
_____no_output_____
###Markdown
Select and rename columns.
###Code
state_grads['Total'] = np.full_like(state_grads['DISTRICT_NAME'], '')
state_grads = state_grads[['DISTRICT_NAME', 'Total', 'S5_PRIOR_4YR_GRAD_RATE']]
state_grads.columns = ['District Name', 'Total', 'Graduation Rate']
state_grads
###Output
_____no_output_____
###Markdown
Convert data types.
###Code
state_grads['Total'] = pd.to_numeric(state_grads['Total'])
state_grads['Graduation Rate'] = pd.to_numeric(state_grads['Graduation Rate']) / 100
###Output
_____no_output_____
###Markdown
Check for matches and non-matches in the two lists. Names all capitalized to catch as many matches as possible.
###Code
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.upper()
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.upper()
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace('.', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(',', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(' DISTRICT', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(' DIST', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(' PUBLIC', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(' SCHOOLS', '')
# state_fiscal['NAME'] = state_fiscal['NAME'].astype(str).str.replace(' SCHOOL', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace('.', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(',', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(' DISTRICT', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(' DIST', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(' PUBLIC', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(' SCHOOLS', '')
# state_grads['District Name'] = state_grads['District Name'].astype(str).str.replace(' SCHOOL', '')
matches = [name for name in list(state_grads['District Name']) if name in list(state_fiscal['NAME'])]
matches.sort()
len(matches)
A = [name for name in list(state_grads['District Name']) if name not in list(state_fiscal['NAME'])]
A.sort()
A
B = [name for name in list(state_fiscal['NAME']) if name not in list(state_grads['District Name'])]
B.sort()
B
###Output
_____no_output_____
###Markdown
No remaining matches I can find.
###Code
#state_fiscal_rename = {}
#state_fiscal = state_fiscal.replace(state_fiscal_rename)
###Output
_____no_output_____
###Markdown
Merge federal and state data, keeping only matches between the two.
###Code
state_grads_merged = pd.merge(state_fiscal, state_grads, how='inner', left_on='NAME', right_on='District Name')
###Output
_____no_output_____
###Markdown
Save cleaned data.
###Code
state_grads_merged.to_csv('../../data/state_data_merged/' + abbr + '.csv', index=False)
###Output
_____no_output_____ |
Rightmove.ipynb | ###Markdown
Rightmove - Building a dataset of property listings---In this notebook, I will be demonstrating how to scrape property listings from Rightmove.co.uk, the UK's largest property listing website. This notebook will create a csv file with the following information for each property listing:* Price* Property link* Number of bedrooms* Address of propertyIt is possible to dive deeper into each property and scrape specific information on each property, such as the description of the property, distance to nearest tube station and schools around the property. However, going into such detail for each property slows down the code quite a bit and I don't need such detailed information for my current project. I will leave it up to you to decide how much information you require.
###Code
# importing our libraries
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd
import time
import random
"""
Rightmove uses specific codes to describe each London borough, I have
manually collected these codes by search for each borough individually.
"""
BOROUGHS = {
"City of London": "5E61224",
"Barking and Dagenham": "5E61400",
"Barnet": "5E93929",
"Bexley": "5E93932",
"Brent": "5E93935",
"Bromley": "5E93938",
"Camden": "5E93941",
"Croydon": "5E93944",
"Ealing": "5E93947",
"Enfield": "5E93950",
"Greenwich": "5E61226",
"Hackney": "5E93953",
"Hammersmith and Fulham": "5E61407",
"Haringey": "5E61227",
"Harrow": "5E93956",
"Havering": "5E61228",
"Hillingdon": "5E93959",
"Hounslow": "5E93962",
"Islington": "5E93965",
"Kensington and Chelsea": "5E61229",
"Kingston upon Thames": "5E93968",
"Lambeth": "5E93971",
"Lewisham": "5E61413",
"Merton": "5E61414",
"Newham": "5E61231",
"Redbridge": "5E61537",
"Richmond upon Thames": "5E61415",
"Southwark": "5E61518",
"Sutton": "5E93974",
"Tower Hamlets": "5E61417",
"Waltham Forest": "5E61232",
"Wandsworth": "5E93977",
"Westminster": "5E93980",
}
def main():
# initialise index, this tracks the page number we are on. every additional page adds 24 to the index
# create lists to store our data
all_apartment_links = []
all_description = []
all_address = []
all_price = []
# apparently the maximum page limit for rightmove is 42
for borough in list(BOROUGHS.values()):
# initialise index, this tracks the page number we are on. every additional page adds 24 to the index
index = 0
key = [key for key, value in BOROUGHS.items() if value == borough]
print(f"We are scraping the borough named: {key}")
for pages in range(41):
# define our user headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"
}
# the website changes if the you are on page 1 as compared to other pages
if index == 0:
rightmove = f"https://www.rightmove.co.uk/property-for-sale/find.html?locationIdentifier=REGION%{borough}&sortType=6&propertyTypes=&includeSSTC=false&mustHave=&dontShow=&furnishTypes=&keywords="
elif index != 0:
rightmove = f"https://www.rightmove.co.uk/property-for-sale/find.html?locationIdentifier=REGION%{borough}&sortType=6&index={index}&propertyTypes=&includeSSTC=false&mustHave=&dontShow=&furnishTypes=&keywords="
# request our webpage
res = requests.get(rightmove, headers=headers)
# check status
res.raise_for_status()
soup = BeautifulSoup(res.text, "html.parser")
# This gets the list of apartments
apartments = soup.find_all("div", class_="l-searchResult is-list")
# This gets the number of listings
number_of_listings = soup.find(
"span", {"class": "searchHeader-resultCount"}
)
number_of_listings = number_of_listings.get_text()
number_of_listings = int(number_of_listings.replace(",", ""))
for i in range(len(apartments)):
# tracks which apartment we are on in the page
apartment_no = apartments[i]
# append link
apartment_info = apartment_no.find("a", class_="propertyCard-link")
link = "https://www.rightmove.co.uk" + apartment_info.attrs["href"]
all_apartment_links.append(link)
# append address
address = (
apartment_info.find("address", class_="propertyCard-address")
.get_text()
.strip()
)
all_address.append(address)
# append description
description = (
apartment_info.find("h2", class_="propertyCard-title")
.get_text()
.strip()
)
all_description.append(description)
# append price
price = (
apartment_no.find("div", class_="propertyCard-priceValue")
.get_text()
.strip()
)
all_price.append(price)
print(f"You have scrapped {pages + 1} pages of apartment listings.")
print(f"You have {number_of_listings - index} listings left to go")
print("\n")
# code to ensure that we do not overwhelm the website
time.sleep(random.randint(1, 3))
            # Code to count how many listings we have scraped already.
index = index + 24
if index >= number_of_listings:
break
# convert data to dataframe
data = {
"Links": all_apartment_links,
"Address": all_address,
"Description": all_description,
"Price": all_price,
}
df = pd.DataFrame.from_dict(data)
df.to_csv(r"sales_data.csv", encoding="utf-8", header="true", index = False)
if __name__ == "__main__":
main()
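# Quick sanity check of the scraped file (an addition, not in the original notebook;
# it assumes sales_data.csv was written by main() above)
scraped = pd.read_csv("sales_data.csv")
scraped.head()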
###Output
_____no_output_____ |
notebooks/curve_interface_methyl.ipynb | ###Markdown
Part 0: Prepare Required file
###Code
host = 'me1'
n_bp = 13
find_helix_folder = '/Users/alayah361/Documents/Research/work/methylation/cg_13/curves/'
prep_helix = PrepareHelix(find_helix_folder, host, n_bp)
print(f'cd {prep_helix.workfolder}')
# prep_helix.copy_input_xtc()
# prep_helix.copy_input_pdb()
###Output
_____no_output_____
###Markdown
Cut trajectory for testcase`gmx trjcat -f bdna+bdna.all.xtc -o temp.xtc -e 1000` `mv temp.xtc bdna+bdna.all.xtc` Part 1: assign number of base-pairs
###Code
n_bp = 13
pdb_in = path.join(prep_helix.input_folder, 'bdna+bdna.npt4.all.pdb')
xtc_in = path.join(prep_helix.input_folder, 'bdna+bdna.all.xtc')
###Output
_____no_output_____
###Markdown
Part 2: Convert xtc to dcd
###Code
cmd = f'vmd -pdb {prep_helix.pdb_in} {prep_helix.xtc_in}'
print(cmd)
# In vmd tkconsole
dcd_out = path.join(prep_helix.input_folder, 'bdna+bdna.0_1ns.10frames.dcd')
print('In vmd tkconsole')
cmd = f'animate write dcd {dcd_out} beg 1 end 11 waitfor all'
print(cmd)
###Output
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.npt4.all.pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.all.xtc
In vmd tkconsole
animate write dcd /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.0_1ns.10frames.dcd beg 1 end 11 waitfor all
###Markdown
Part 3: Change B-chain ID from 1-13 to 14-26
###Code
pdb_modified = path.join(prep_helix.input_folder, 'bdna_modi.pdb')
# check the pdb to see whether the resid needs to be changed
cmd = f'vim {prep_helix.pdb_in}'
print(cmd)
print(':5,414s/$/A/')
print(':415,$s/$/B/')
reader = PDBReader(pdb_in, skip_header=4, skip_footer=2, segid_exist=True)
atgs = reader.get_atomgroup()
# Change resid
resid_offset = n_bp
for atom in atgs:
if atom.segid == 'B':
atom.resid += resid_offset
writer = PDBWriter(pdb_modified, atgs)
writer.write_pdb()
###Output
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna_modi.pdb
###Markdown
Part 4: Initialize FindHelixAgent
###Code
f_agent = FindHelixAgent(prep_helix.workfolder, pdb_modified, dcd_out, n_bp)
###Output
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_allatoms exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/curve_workdir_0_11 exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/haxis_smooth exists
There are 11 frames.
###Markdown
Part 5: Extract single pdb from dcd
###Code
f_agent.extract_pdb_allatoms()
###Output
_____no_output_____
###Markdown
Part 6: Execute Curve+ and Convert to H-axis pdb
###Code
# Smooth curve, contains a lot of pseudo-atoms
f_agent.curveplus_find_smooth_haxis()
# Only n_bp beads
f_agent.curveplus_find_haxis()
###Output
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.0.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.1.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.2.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.3.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.4.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.5.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.6.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.7.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.8.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.9.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.10.pdb
###Markdown
Part 7: Use VMD to show
###Code
rootfolder = '/home/yizaochen/Desktop/methyl_dna'
host = 'cg_13_meth1'
workfolder = path.join(rootfolder, host)
frame_id = 0
allatom_pdb = path.join(workfolder, 'pdbs_allatoms', f'{frame_id}.pdb')
haxis_pdb = path.join(workfolder, 'haxis_smooth', f'haxis.smooth.{frame_id}.pdb')
cmd = 'cd /home/yizaochen/codes/bentdna'
print(cmd)
cmd = f'vmd -pdb {allatom_pdb}'
print(cmd)
cmd = f'mol new {haxis_pdb} type pdb'
print(cmd)
cmd = f'source ./tcl/draw_aa_haxis.tcl'
print(cmd)
###Output
cd /home/yizaochen/codes/bentdna
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_allatoms/0.pdb
mol new /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/haxis_smooth/haxis.smooth.0.pdb type pdb
source ./tcl/draw_aa_haxis.tcl
###Markdown
Part 8-1: Test, Use VMD to show, make dcd
###Code
haxis_folder = path.join(prep_helix.workfolder, 'pdbs_haxis')
cmd = f'cd {haxis_folder}'
print(cmd)
cmd = 'vmd'
print(cmd)
haxis_tcl = '/home/yizaochen/codes/na_mechanics/make_haxis.tcl'
cmd = f'source {haxis_tcl}'
print(cmd)
start = 0
end = 10
cmd = f'read_all_pdb_files {start} {end}'
print(cmd)
haxis_dcd = path.join(prep_helix.output_folder, 'haxis.dcd')
cmd = f'animate write dcd {haxis_dcd} beg {start} end {end} waitfor all'
print(cmd)
pdb_ref = path.join(prep_helix.workfolder, 'pdbs_haxis', 'haxis.0.pdb')
cmd = f'vmd -pdb {pdb_ref} {haxis_dcd}'
print(cmd)
cmd = f'mol new {prep_helix.pdb_modi}'
print(cmd)
cmd = f'mol addfile {prep_helix.dcd_out_test} 1'
print(cmd)
###Output
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.0.pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/output/haxis.dcd
mol new /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna_modi.pdb
mol addfile /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.0_1ns.10frames.dcd 1
###Markdown
Part 9: rm pdb_allatoms
###Code
"""
allpdbs = path.join(prep_helix.workfolder, 'pdbs_allatoms', '*')
cmd = f'rm {allpdbs}'
print(cmd)
"""
###Output
rm /home/yizaochen/codes/dna_rna/length_effect/find_helical_axis/pnas_16mer/pdbs_allatoms/*
###Markdown
Part 0: Prepare Required file
###Code
host = 'cg_13_meth1'
n_bp = 13
find_helix_folder = '/home/yizaochen/Desktop/methyl_dna'
prep_helix = PrepareHelix(find_helix_folder, host, n_bp)
print(f'cd {prep_helix.workfolder}')
prep_helix.copy_input_xtc()
prep_helix.copy_input_pdb()
###Output
_____no_output_____
###Markdown
Cut trajectory for testcase: `gmx trjcat -f bdna+bdna.all.xtc -o temp.xtc -e 1000`, then `mv temp.xtc bdna+bdna.all.xtc`. Part 1: assign number of base-pairs
###Code
n_bp = 13
pdb_in = path.join(prep_helix.input_folder, 'bdna+bdna.npt4.all.pdb')
xtc_in = path.join(prep_helix.input_folder, 'bdna+bdna.all.xtc')
###Output
_____no_output_____
###Markdown
Part 2: Convert xtc to dcd
###Code
cmd = f'vmd -pdb {prep_helix.pdb_in} {prep_helix.xtc_in}'
print(cmd)
# In vmd tkconsole
dcd_out = path.join(prep_helix.input_folder, 'bdna+bdna.0_1ns.10frames.dcd')
print('In vmd tkconsole')
cmd = f'animate write dcd {dcd_out} beg 1 end 11 waitfor all'
print(cmd)
###Output
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.npt4.all.pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.all.xtc
In vmd tkconsole
animate write dcd /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.0_1ns.10frames.dcd beg 1 end 11 waitfor all
###Markdown
Part 3: Change B-chain ID from 1-13 to 14-26
###Code
pdb_modified = path.join(prep_helix.input_folder, 'bdna_modi.pdb')
# check the pdb to see whether the resid needs to be changed
cmd = f'vim {prep_helix.pdb_in}'
print(cmd)
print(':5,414s/$/A/')
print(':415,$s/$/B/')
reader = PDBReader(pdb_in, skip_header=4, skip_footer=2, segid_exist=True)
atgs = reader.get_atomgroup()
# Change resid
resid_offset = n_bp
for atom in atgs:
if atom.segid == 'B':
atom.resid += resid_offset
writer = PDBWriter(pdb_modified, atgs)
writer.write_pdb()
###Output
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna_modi.pdb
###Markdown
Part 4: Initialize FindHelixAgent
###Code
f_agent = FindHelixAgent(prep_helix.workfolder, pdb_modified, dcd_out, n_bp)
###Output
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_allatoms exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/curve_workdir_0_11 exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis exists
/home/yizaochen/Desktop/methyl_dna/cg_13_meth1/haxis_smooth exists
There are 11 frames.
###Markdown
Part 5: Extract single pdb from dcd
###Code
f_agent.extract_pdb_allatoms()
###Output
_____no_output_____
###Markdown
Part 6: Execute Curve+ and Convert to H-axis pdb
###Code
# Smooth curve, contains a lot of pseudo-atoms
f_agent.curveplus_find_smooth_haxis()
# Only n_bp beads
f_agent.curveplus_find_haxis()
###Output
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.0.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.1.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.2.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.3.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.4.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.5.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.6.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.7.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.8.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.9.pdb
Write PDB: /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.10.pdb
###Markdown
Part 7: Use VMD to show
###Code
rootfolder = '/home/yizaochen/Desktop/methyl_dna'
host = 'cg_13_meth1'
workfolder = path.join(rootfolder, host)
frame_id = 0
allatom_pdb = path.join(workfolder, 'pdbs_allatoms', f'{frame_id}.pdb')
haxis_pdb = path.join(workfolder, 'haxis_smooth', f'haxis.smooth.{frame_id}.pdb')
cmd = 'cd /home/yizaochen/codes/bentdna'
print(cmd)
cmd = f'vmd -pdb {allatom_pdb}'
print(cmd)
cmd = f'mol new {haxis_pdb} type pdb'
print(cmd)
cmd = f'source ./tcl/draw_aa_haxis.tcl'
print(cmd)
###Output
cd /home/yizaochen/codes/bentdna
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_allatoms/0.pdb
mol new /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/haxis_smooth/haxis.smooth.0.pdb type pdb
source ./tcl/draw_aa_haxis.tcl
###Markdown
Part 8-1: Test, Use VMD to show, make dcd
###Code
haxis_folder = path.join(prep_helix.workfolder, 'pdbs_haxis')
cmd = f'cd {haxis_folder}'
print(cmd)
cmd = 'vmd'
print(cmd)
haxis_tcl = '/home/yizaochen/codes/na_mechanics/make_haxis.tcl'
cmd = f'source {haxis_tcl}'
print(cmd)
start = 0
end = 10
cmd = f'read_all_pdb_files {start} {end}'
print(cmd)
haxis_dcd = path.join(prep_helix.output_folder, 'haxis.dcd')
cmd = f'animate write dcd {haxis_dcd} beg {start} end {end} waitfor all'
print(cmd)
pdb_ref = path.join(prep_helix.workfolder, 'pdbs_haxis', 'haxis.0.pdb')
cmd = f'vmd -pdb {pdb_ref} {haxis_dcd}'
print(cmd)
cmd = f'mol new {prep_helix.pdb_modi}'
print(cmd)
cmd = f'mol addfile {prep_helix.dcd_out_test} 1'
print(cmd)
###Output
vmd -pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/pdbs_haxis/haxis.0.pdb /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/output/haxis.dcd
mol new /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna_modi.pdb
mol addfile /home/yizaochen/Desktop/methyl_dna/cg_13_meth1/input/bdna+bdna.0_1ns.10frames.dcd 1
###Markdown
Part 9: rm pdb_allatoms
###Code
"""
allpdbs = path.join(prep_helix.workfolder, 'pdbs_allatoms', '*')
cmd = f'rm {allpdbs}'
print(cmd)
"""
###Output
rm /home/yizaochen/codes/dna_rna/length_effect/find_helical_axis/pnas_16mer/pdbs_allatoms/*
|
health-students.ipynb | ###Markdown
Risk Adjustment and Machine Learning
Loading health data
###Code
# Import pandas and assign to it the pd alias
import pandas as pd
# Load csv to pd.dataframe using pd.read_csv
df_salud = pd.read_csv('../suficiencia.csv')
# Index is not appropriately set
print(df_salud.head())
# pd.read_csv inferred inconvenient data types for some columns
for columna in df_salud.columns:
print(columna,df_salud[columna].dtype)
# TO DO: declare a dict named dtype with column names as keys and data types as values
# We need MUNI_2010, MUNI_2011, DPTO_2010 and DPTO_2011 as data type 'category'. We need SEXO_M and SEXO_F as bool as well.
dtype = {}
# TO DO: declare a integer variable with the column number to be taken as index
index_col =
# We reload csv file using index_col and dtype parameters
df_salud = pd.read_csv('../suficiencia.csv',index_col= index_col,dtype=dtype)
# Index is appropriately set
print(df_salud.head())
# TO DO: check pd.read_csv has convenient data types
# Check last code cell for help.
# TO DO: print mean value for expenditure in 2010 and 2011
# Expenditure is given by variables 'VALOR_TOT_2010' and 'VALOR_TOT_2011'
###Output
_____no_output_____
###Markdown
Exploring health data
We are interested in exploring the risk profiles of individuals. Let's estimate the expenditure and enrollee density distributions for different expenditure intervals. We will consider intervals of \$10,000 COP between \$0 and \$3,000,000 COP.
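The 2011 trace left as an exercise in the next cell can simply mirror the 2010 one. A minimal sketch (assuming the plotly objects and the `tamanho`/`step_size` values defined in that cell; only the column and labels change):
```python
# Sketch only: mirrors the 2010 trace defined in the next cell.
trace2011 = go.Histogram(
    x=df_salud['VALOR_TOT_2011'],
    name='2011',
    histnorm='probability',
    xbins=dict(start=0.0, end=tamanho, size=step_size),
    legendgroup='2011'
)
```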
###Code
# We will be using plotly to graph the distributions.
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode(connected=True)
# Set interval and step size
tamanho = 10**6*3
step_size = 10**4
# Enrollee distribution is straightforward using plotly.
trace2010 = go.Histogram(
x=df_salud['VALOR_TOT_2010'],
name='2010',
histnorm='probability',
xbins=dict(start=0.0,end=tamanho,size=step_size),
legendgroup='2010'
)
# TO DO: declare a second trace for the 2011 enrollee distribution
trace2011 = go.Histogram(
)
layout = go.Layout(
legend=dict(
xanchor='center',
yanchor='top',
orientation='h',
y=-0.25,
x=0.5,
),
yaxis=dict(
title='Density',
rangemode='tozero'
),
xaxis=dict(
title='Expenditure'
),
    title='Enrollee density'
)
# TO DO Add both traces to a list and pass it to go.Figure data parameter
fig = go.Figure(data=, layout=layout)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
Expenditure distribution needs extra work since we are accumulating expenditure and not enrollees. For this purpose we first sort enrollees, then we calculate accumulated expenditure up to each interval and normalize it by total expenditure and finally we differentiate the series.
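One possible implementation is sketched below. This is not the official solution; it assumes the `VALOR_TOT_<year>` column naming and the `tamanho`/`step_size` values defined earlier.
```python
# Sketch only: fraction of total expenditure accumulated up to each cutoff.
def calculate_expenditure_cumulative_density(year):
    expenditure = df_salud[f'VALOR_TOT_{year}'].sort_values()
    total = expenditure.sum()
    cumulative_density = [expenditure[expenditure <= cutoff].sum() / total
                          for cutoff in range(0, tamanho + step_size, step_size)]
    return cumulative_density
```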
###Code
# TO DO: import numpy with alias np
# TO DO: write function to calculate expenditure cumulative density for a given year
def calculate_expenditure_cumulative_density(year):
return cumulative_density
density_2010 = np.diff(calculate_expenditure_cumulative_density('2010'))
density_2011 = np.diff(calculate_expenditure_cumulative_density('2011'))
# TO DO: declare a trace for 2010 expenditure distribution. Use color '#1f77b4' for markers.
trace_2010 = go.Scatter(
)
trace_2011 = go.Scatter(
x=list(range(0,tamanho,step_size)),
y=density_2011,
legendgroup='2011',
name='2011',
marker=dict(color='#ff7f0e'),
type='bar'
)
layout = go.Layout(
legend=dict(
xanchor='center',
yanchor='top',
orientation='h',
y=-0.25,
x=0.5,
),
yaxis=dict(
title='Density',
rangemode='tozero'
),
xaxis=dict(
title='Expenditure'
),
title='Expenditure density'
)
# Add both traces to a list and pass it to go.Figure data parameter. Add the layout parameter as well.
fig = go.Figure(data=,layout=)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
How about cumulative density for enrollees and expenditure? Enrollee cumulative density needs some extra work since we did not explicitly calculate enrollee density before.
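A hedged sketch of one way to do this, using the `percentileofscore` hint given in the next cell (column naming assumed as above):
```python
# Sketch only: share of enrollees (in %) with expenditure at or below each cutoff.
from scipy import stats

def calculate_enrollee_cumulative_density(year):
    series = df_salud[f'VALOR_TOT_{year}']
    cumulative_density = [stats.percentileofscore(series, cutoff)
                          for cutoff in range(0, tamanho + step_size, step_size)]
    return cumulative_density
```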
###Code
# We will be using scipy
from scipy import stats
# TO DO: scipy.stats.percentileofscore(series,score) returns percentile value of score in series
def calculate_enrollee_cumulative_density(year):
return cumulative_density
enrollee_cumulative_density_2010 = calculate_enrollee_cumulative_density('2010')
enrollee_cumulative_density_2011 = calculate_enrollee_cumulative_density('2011')
expenditure_cumulative_density_2010 = calculate_expenditure_cumulative_density('2010')
expenditure_cumulative_density_2011 = calculate_expenditure_cumulative_density('2011')
# TO DO: Create cumulative expenditure and enrollees traces and plot them. Use previous color conventions.
trace_enrollee_2010 = go.Scatter(
)
trace_enrollee_2011 = go.Scatter(
)
trace_expenditure_2010 = go.Scatter(
)
trace_expenditure_2011 = go.Scatter(
)
layout = go.Layout(
legend=dict(
xanchor='center',
yanchor='top',
orientation='h',
y=-0.25,
x=0.5,
),
yaxis=dict(
title='Cumulative density (%)',
rangemode='tozero'
),
xaxis=dict(
title='Expenditure'
),
title='Cumulative density of enrollees and expenditure'
)
###Output
_____no_output_____
###Markdown
Benchmarking the problem
Before fitting any models, it is convenient to have a benchmark from a model that is as simple as possible. We estimate the mean absolute error (MAE) of the simple model $$ y_{it}^{pred} = \frac{1}{N}\sum_{N}{y_{it}} $$
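A minimal sketch of the per-enrollee benchmark error is shown below (it assumes `ymean` as computed in the next cell; the `.mean()` applied afterwards turns the absolute errors into the MAE):
```python
# Sketch only: absolute error of predicting the 2011 mean expenditure for everyone.
def calculate_benchmark_mae(row):
    mae = abs(row['VALOR_TOT_2011'] - ymean)
    return mae
```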
###Code
ymean = df_salud['VALOR_TOT_2011'].mean()
# TO DO : write a function that calculates the benchmark MAE
def calculate_benchmark_mae(row):
return mae
print('BENCHMARK MAE', df_salud.apply(calculate_benchmark_mae, axis=1).mean())
###Output
_____no_output_____
###Markdown
MSPS risk adjustment
The Colombian Ministry of Health and Social Protection (MSPS) currently employs a linear regression of annual health expenditure on sociodemographic risk factors that include gender, age groups and location as the risk-adjustment mechanism: $$y_{it} = \beta_{0} + \sum_{K}{\beta_{j}D_{jit}} + \epsilon_{i}$$ We will start by calculating age groups from the variable 'EDAD_2011'.
###Code
# Creating a grouping variable is straightforward with pd.cut
bins = [0,1,4,18,44,49,54,59,64,69,74,150]
labels = ['0_1','2_4','5_18','19_44','45_49','50_54','55_59','60_64','65_69','70_74','74_']
df_salud['AGE_GROUP'] = pd.cut(df_salud['EDAD_2011'],bins,labels=labels,include_lowest=True)
print(df_salud[['EDAD_2011','AGE_GROUP']])
# We also need to create dummy variables using pd.get_dummies
age_group_dummies = pd.get_dummies(df_salud['AGE_GROUP'],prefix='AGE_GROUP')
df_salud = pd.concat([df_salud,age_group_dummies],axis=1)
for column in df_salud.columns:
print(column)
###Output
_____no_output_____
###Markdown
We also need to group location codes into government-defined categories. This requires some extra work. Make sure you have the divipola.csv file in your home directory.
###Code
# Download divipola.csv from your email and move it to your home directory
divipola = pd.read_csv('../divipola.csv',index_col=0)
def give_location_group(row,divipola=divipola):
codigo_dpto = str(row['DPTO_2011']).rjust(2,'0')
codigo_muni = str(row['MUNI_2011']).rjust(3,'0')
codigo = codigo_dpto + codigo_muni
try:
grupo = divipola.loc[int(codigo)]['zona']
# Exception management for a single observation where last digit of municipality code is not valid
except KeyError:
return 'C'
return grupo
location_group_dummies = pd.get_dummies(df_salud.apply(give_location_group,axis=1),prefix='LOCATION_GROUP')
df_salud = pd.concat([df_salud,location_group_dummies],axis=1)
for column in df_salud.columns:
print(column)
###Output
_____no_output_____
###Markdown
Now we are ready to fit the MSPS linear model.
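A hedged sketch of one way the cross-validation call in the exercise cell below could look; it assumes the `features` and `target` lists defined in that cell and mirrors the completed call used later for the regression tree:
```python
# Sketch only: 10-fold CV, scored with negative mean absolute error.
from sklearn import linear_model
from sklearn.model_selection import cross_val_score

reg = linear_model.LinearRegression()
neg_mae = cross_val_score(estimator=reg, X=df_salud[features], y=df_salud[target],
                          cv=10, scoring='neg_mean_absolute_error')
print('REGRESSION MAE', -1 * neg_mae.mean())
```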
###Code
# We will be using sklearn
from sklearn import linear_model
from sklearn.model_selection import cross_val_score
# Feature space
# One reference category is excluded for each dummy group
features = ['SEXO_M',
'AGE_GROUP_2_4',
'AGE_GROUP_5_18',
'AGE_GROUP_19_44',
'AGE_GROUP_45_49',
'AGE_GROUP_50_54',
'AGE_GROUP_55_59',
'AGE_GROUP_60_64',
'AGE_GROUP_65_69',
'AGE_GROUP_70_74',
'AGE_GROUP_74_',
'LOCATION_GROUP_N',
'LOCATION_GROUP_Z',]
# Target space
target = ['VALOR_TOT_2011']
# TO DO: calculate 10 cv mae for linear regression model using sklearn.model_selection.cross_val_score. Take a look at the needed parameters.
reg = linear_model.LinearRegression()
neg_mae = cross_val_score(estimator=,X=,y=,cv=,scoring=)
print('REGRESSION MAE',-1*neg_mae.mean())
reg = reg.fit(df_salud[features].values,df_salud[target].values)
# TO DO: predict over enrollees with 2011 expenditure above $3,000,000
upper =
y_pred_upper = [y[0] for y in reg.predict(upper[features])]
print('REGRESSION MAE UPPER',(y_pred_upper-upper['VALOR_TOT_2011']).mean())
# TO DO: predict over enrollees with 2011 expenditure below or equal to $3,000,000
lower =
y_pred_lower = [y[0] for y in reg.predict(lower[features])]
print('REGRESSION MAE LOWER',(y_pred_lower-lower['VALOR_TOT_2011']).mean())
###Output
_____no_output_____
###Markdown
Risk adjustment using machine learning
How about a regression tree?
###Code
# We will be using sklearn
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
# Feature space
# One reference category is excluded for each dummy group
features = ['SEXO_M',
'AGE_GROUP_2_4',
'AGE_GROUP_5_18',
'AGE_GROUP_19_44',
'AGE_GROUP_45_49',
'AGE_GROUP_50_54',
'AGE_GROUP_55_59',
'AGE_GROUP_60_64',
'AGE_GROUP_65_69',
'AGE_GROUP_70_74',
'AGE_GROUP_74_',
'LOCATION_GROUP_N',
'LOCATION_GROUP_Z',]
# Target space
target = ['VALOR_TOT_2011']
reg_tree = DecisionTreeRegressor(min_samples_leaf=1000)
neg_mae = cross_val_score(estimator=reg_tree,X=df_salud[features],y=df_salud[target],cv=10,scoring='neg_mean_absolute_error')
print('TREE REGRESSION MAE',-1*neg_mae.mean())
###Output
_____no_output_____
###Markdown
What does a tree look like?
###Code
# We will use modules from sklearn, ipython and pydotplus to visualize trees
from sklearn.externals.six import StringIO
from IPython.display import Image, display
from sklearn.tree import export_graphviz
import pydotplus
def plot_tree(tree):
dot_data = StringIO()
export_graphviz(
tree,
out_file=dot_data,
filled=True,
special_characters=True,
precision=0,
feature_names=features
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
display(Image(graph.create_png()))
reg_tree = DecisionTreeRegressor(min_samples_leaf=1000)
reg_tree = reg_tree.fit(df_salud[features].values,df_salud[target].values)
plot_tree(reg_tree)
upper = df_salud[df_salud['VALOR_TOT_2011'] > (3*10**6)]
y_pred_upper = reg_tree.predict(upper[features])
print('TREE REGRESSION MAE UPPER',(y_pred_upper-upper['VALOR_TOT_2011']).mean())
lower = df_salud[df_salud['VALOR_TOT_2011'] <= (3*10**6)]
y_pred_lower = reg_tree.predict(lower[features])
print('TREE REGRESSION MAE LOWER',(y_pred_lower-lower['VALOR_TOT_2011']).mean())
# Feature space
# One reference category is excluded for each dummy group
features = ['SEXO_M',
'AGE_GROUP_2_4',
'AGE_GROUP_5_18',
'AGE_GROUP_19_44',
'AGE_GROUP_45_49',
'AGE_GROUP_50_54',
'AGE_GROUP_55_59',
'AGE_GROUP_60_64',
'AGE_GROUP_65_69',
'AGE_GROUP_70_74',
'AGE_GROUP_74_',
'LOCATION_GROUP_N',
'LOCATION_GROUP_Z',
'DIAG_1_C_2010',
'DIAG_1_P_2010',
'DIAG_1_D_2010',]
# Target space
target = ['VALOR_TOT_2011']
reg_tree = DecisionTreeRegressor(min_samples_leaf=100)
neg_mae = cross_val_score(estimator=reg_tree,X=df_salud[features],y=df_salud[target],cv=10,scoring='neg_mean_absolute_error')
print('TREE REGRESSION MAE',-1*neg_mae.mean())
reg_tree = DecisionTreeRegressor(min_samples_leaf=100)
reg_tree = reg_tree.fit(df_salud[features].values,df_salud[target].values)
plot_tree(reg_tree)
upper = df_salud[df_salud['VALOR_TOT_2011'] > (3*10**6)]
y_pred_upper = reg_tree.predict(upper[features])
print('TREE REGRESSION MAE UPPER',(y_pred_upper-upper['VALOR_TOT_2011']).mean())
lower = df_salud[df_salud['VALOR_TOT_2011'] <= (3*10**6)]
y_pred_lower = reg_tree.predict(lower[features])
print('TREE REGRESSION MAE LOWER',(y_pred_lower-lower['VALOR_TOT_2011']).mean())
###Output
_____no_output_____ |
EMR/smstudio-pyspark-hive-sentiment-analysis.ipynb | ###Markdown
Overview
This notebook does the following:
* Demonstrates how you can visually connect the Amazon SageMaker Studio Sparkmagic kernel to a kerberized EMR cluster
* Explores and queries data from a Hive table
* Uses the data locally
* Provides resources that demonstrate how to use the local data for ML, including using SageMaker Processing.
----------
When using PySpark kernel notebooks, there is no need to create a SparkContext or a HiveContext; those are all created for you automatically when you run the first code cell, and you'll be able to see the progress printed. The contexts are created with the following variable names:
- SparkContext (sc)
- HiveContext (sqlContext)
----------
PySpark magics: The PySpark kernel provides some predefined “magics”, which are special commands that you can call with `%%` (e.g. `%%MAGIC`). The magic command must be the first word in a code cell and allow for multiple lines of content. You can’t put comments before a cell magic. For more information on magics, see [here](http://ipython.readthedocs.org/en/stable/interactive/magics.html).
Running locally (%%local): You can use the `%%local` magic to run your code locally on the Jupyter server without going to Spark. When you use %%local all subsequent lines in the cell will be executed locally. The code in the cell must be valid Python code.
###Code
%%local
print("Demo Notebook")
###Output
_____no_output_____
###Markdown
Connection to a Kerberized EMR Cluster
When prompted to enter a username and password, use **"user1"** for the username and **"pwd1"** for the password. In the cell below, the code block is autogenerated. You can generate this code by clicking on the "Cluster" link at the top of the notebook and selecting the EMR cluster. The "j-xxxxxxxxxxxx" is the cluster id of the selected cluster.
###Code
# %load_ext sagemaker_studio_analytics_extension.magics
# %sm_analytics emr connect --cluster_id j-xxxxxxxxxxxx --auth_type Kerberos --language python
###Output
_____no_output_____
###Markdown
Session information (%%info)
Livy is an open source REST server for Spark. When you execute a code cell in a sparkmagic notebook, it creates a Livy session to execute your code. The `%%info` magic will display the current Livy session information.
###Code
%%info
###Output
_____no_output_____
###Markdown
In the next cell, we will use the HiveContext to query Hive and look at the databases and tables
###Code
sqlContext = HiveContext(sqlContext)
dbs = sqlContext.sql("show databases")
dbs.show()
tables = sqlContext.sql("show tables")
tables.show()
###Output
_____no_output_____
###Markdown
Next, we will query the movie_reviews table and get the data into a spark dataframe. You can visualize the data from the remote cluster locally in the notebook
###Code
movie_reviews = sqlContext.sql("select * from movie_reviews").cache()
###Output
_____no_output_____
###Markdown
Let's look at the data size and the size of each class (positive and negative) and visualize it. You can see that we have a balanced dataset with an equal number in both classes (25,000 each).
###Code
# Shape
print((movie_reviews.count(), len(movie_reviews.columns)))
# count of both positive and negative sentiments
movie_reviews.groupBy('sentiment').count().show()
pos_reviews = movie_reviews.filter(movie_reviews.sentiment == 'positive').collect()
neg_reviews = movie_reviews.filter(movie_reviews.sentiment == 'negative').collect()
import matplotlib.pyplot as plt
def plot_counts(positive,negative):
plt.rcParams['figure.figsize']=(6,6)
plt.bar(0,positive,width=0.6,label='Positive Reviews',color='Green')
plt.bar(2,negative,width=0.6,label='Negative Reviews',color='Red')
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys())
plt.ylabel('Count')
plt.xlabel('Type of Review')
plt.tick_params(
axis='x',
which='both',
bottom=False,
top=False,
labelbottom=False)
plt.show()
plot_counts(len(pos_reviews),len(neg_reviews))
%matplot plt
###Output
_____no_output_____
###Markdown
Next, let's inspect the length of reviews using the pyspark.sql.functions module
###Code
from pyspark.sql.functions import length
reviewlengthDF = movie_reviews.select(length('review').alias('Length of Review'))
reviewlengthDF.show()
###Output
_____no_output_____
###Markdown
You can also execute SparkSQL queries using the %%sql magic and save results to a local data frame. This allows for quick data exploration. Max rows returned by default is 2500. You can set the max rows by using the -n argument.
###Code
%%sql -o movie_reviews_sparksql_df -n 10
select * from movie_reviews
###Output
_____no_output_____
###Markdown
You can access and explore the data in the dataframe locally
###Code
%%local
movie_reviews_sparksql_df.head(10)
###Output
_____no_output_____
###Markdown
Session logs (%%logs)
You can get the logs of your current Livy session to debug any issues you encounter.
###Code
%%logs
###Output
_____no_output_____
###Markdown
Using the SageMaker Studio sparkmagic kernel, you can train machine learning models in the Spark cluster using the *SageMaker Spark library*. SageMaker Spark is an open source Spark library for Amazon SageMaker. For examples, see [here](https://github.com/aws/sagemaker-spark#example-using-sagemaker-spark-with-any-sagemaker-algorithm). In this notebook, however, we will use a SageMaker experiment, trial and estimator to train a model and deploy the model using SageMaker real-time endpoint hosting. In the next cell, we will install the necessary libraries.
###Code
%%local
import sys
!{sys.executable} -m pip install -U "sagemaker>=1.72.1,<2.0.0"
!{sys.executable} -m pip install sagemaker-experiments
!{sys.executable} -m pip show sagemaker
###Output
_____no_output_____
###Markdown
Next, we will import libraries and set global definitions
###Code
%%local
import sagemaker
import boto3
import botocore
from botocore.exceptions import ClientError
from time import strftime, gmtime
import json
from sagemaker import get_execution_role
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
%%local
sess = boto3.Session()
region_name = sess.region_name
role = sagemaker.get_execution_role()
sm_runtime = boto3.Session().client('sagemaker-runtime')
###Output
_____no_output_____
###Markdown
In the next cell, we will create a new S3 bucket that will be used for storing the training and validation data
###Code
%%local
stsclient = boto3.client("sts", region_name=region_name)
s3client = boto3.client("s3", region_name=region_name)
aws_account_id = stsclient.get_caller_identity()["Account"]
bucket = "sagemaker-studio-pyspark-{}-{}".format(region_name, aws_account_id)
key = "sentiment/movie_reviews.csv"
smprocessing_input = "s3://{}/{}".format(bucket, key)
try:
if region_name=="us-east-1":
s3client.create_bucket(Bucket=bucket)
else:
s3client.create_bucket(Bucket=bucket, CreateBucketConfiguration={
'LocationConstraint': region_name})
except ClientError as e:
error_code = e.response['Error']['Code']
message = e.response['Error']['Message']
if error_code == 'BucketAlreadyOwnedByYou':
print ('A bucket with the same name already exists in your account - using the same bucket.')
pass
else:
print("Error->{}:{}".format(error_code, message))
###Output
_____no_output_____
###Markdown
Send the following variables to spark
###Code
%%send_to_spark -i bucket -t str -n bucket
%%send_to_spark -i key -t str -n key
###Output
_____no_output_____
###Markdown
Convert the spark dataframe received by querying the hive table (using the sqlContext.sql above) to a pandas dataframe and upload the data to the S3 bucket
###Code
movie_reviews_df = movie_reviews.toPandas()
import boto3
from io import StringIO
csv_buffer = StringIO()
movie_reviews_df.to_csv(csv_buffer)
s3_resource = boto3.resource('s3')
s3_resource.Object(bucket, key).put(Body=csv_buffer.getvalue())
###Output
_____no_output_____
###Markdown
Pre-process data and feature engineering: Amazon SageMaker Processing jobs using the Scikit-learn Processor
With Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job. The cell below shows how to run scikit-learn scripts using a Docker image provided and maintained by SageMaker to preprocess data.
Note: We will use an "ml.m5.xlarge" instance as the instance type for SageMaker processing, training and model hosting. If you don't have access to this instance type and see a "ResourceLimitExceeded" error, use another instance type that you have access to. You can also request a service limit increase using AWS Support Center.
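The `preprocessing.py` script itself is not included in this notebook. A hypothetical sketch of what it might contain is shown below; the input file name, the `__label__` formatting expected by BlazingText, and the split logic are assumptions based on the inputs, outputs and arguments declared in the next cell, not the actual script:
```python
# Hypothetical preprocessing.py sketch (not the actual script used in this demo).
import argparse
import os
import pandas as pd
from sklearn.model_selection import train_test_split

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--train-test-split-ratio', type=float, default=0.2)
    args, _ = parser.parse_known_args()

    # The ProcessingInput mounts the CSV under /opt/ml/processing/input
    df = pd.read_csv('/opt/ml/processing/input/movie_reviews.csv')
    lines = '__label__' + df['sentiment'].astype(str) + ' ' + df['review'].astype(str)

    train, validation = train_test_split(lines, test_size=args.train_test_split_ratio)

    os.makedirs('/opt/ml/processing/train', exist_ok=True)
    os.makedirs('/opt/ml/processing/validation', exist_ok=True)
    train.to_csv('/opt/ml/processing/train/train.txt', index=False, header=False)
    validation.to_csv('/opt/ml/processing/validation/validation.txt', index=False, header=False)
```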
###Code
%%local
instance_type_smprocessing="ml.m5.xlarge"
instance_type_smtraining="ml.m5.xlarge"
instance_type_smendpoint="ml.m5.xlarge"
%%local
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type=instance_type_smprocessing,
instance_count=1)
%%local
print(smprocessing_input)
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=smprocessing_input,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='validation_data',
source='/opt/ml/processing/validation')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'validation_data':
preprocessed_validation_data = output['S3Output']['S3Uri']
%%local
print(preprocessed_training_data)
print(preprocessed_validation_data)
%%local
prefix = 'blazingtext/supervised'
s3_train_data = preprocessed_training_data
s3_validation_data = preprocessed_validation_data
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
###Output
_____no_output_____
###Markdown
Train a SageMaker model with Amazon SageMaker Experiments
Amazon SageMaker Experiments allows us to keep track of model training; organize related models together; and log model configuration, parameters, and metrics to reproduce and iterate on previous models and compare models. Let's create the experiment and trial, and train the model. To reduce cost, the training code below uses spot instances.
###Code
%%local
sm_session = sagemaker.session.Session()
create_date = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
sentiment_experiment = Experiment.create(experiment_name="sentimentdetection-{}".format(create_date),
description="Detect sentiment in text",
sagemaker_boto_client=boto3.client('sagemaker'))
trial = Trial.create(trial_name="sentiment-trial-blazingtext-{}".format(strftime("%Y-%m-%d-%H-%M-%S", gmtime())),
experiment_name=sentiment_experiment.experiment_name,
sagemaker_boto_client=boto3.client('sagemaker'))
container = sagemaker.amazon.amazon_estimator.get_image_uri(region_name, "blazingtext", "latest")
print('Using SageMaker BlazingText container: {} ({})'.format(container, region_name))
%%local
train_use_spot_instances = True
train_max_run=3600
train_max_wait = 3600 if train_use_spot_instances else None
bt_model = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type=instance_type_smtraining,
train_volume_size = 30,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sm_session,
train_use_spot_instances=train_use_spot_instances,
train_max_run=train_max_run,
train_max_wait=train_max_wait)
%%local
bt_model.set_hyperparameters(mode="supervised",
epochs=10,
min_count=2,
learning_rate=0.005328,
vector_dim=286,
early_stopping=True,
patience=4,
min_epochs=5,
word_ngrams=2)
%%local
train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated',
content_type='text/plain', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated',
content_type='text/plain', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
%%local
%%time
bt_model.fit(data_channels,
experiment_config={
"ExperimentName": sentiment_experiment.experiment_name,
"TrialName": trial.trial_name,
"TrialComponentDisplayName": "BlazingText-Training",
},
logs=False)
###Output
_____no_output_____
###Markdown
Deploy the model and get predictions
###Code
%%local
text_classifier = bt_model.deploy(initial_instance_count = 1, instance_type = instance_type_smendpoint)
%%local
review = ["please give this one a miss br br kristy swanson and the rest of the cast"
"rendered terrible performances the show is flat flat flat br br"
"i don't know how michael madison could have allowed this one on his plate"
"he almost seemed to know this wasn't going to work out"
"and his performance was quite lacklustre so all you madison fans give this a miss"]
tokenized_review = [' '.join(t.split(" ")) for t in review]
#For retrieving the top k predictions, you can set k in the configuration
payload = {"instances" : tokenized_review}
bt_endpoint_name = text_classifier.endpoint
response = sm_runtime.invoke_endpoint(EndpointName=bt_endpoint_name,
ContentType = 'application/json',
Body=json.dumps(payload))
output = json.loads(response['Body'].read().decode('utf-8'))
#make the output readable
import copy
predictions = copy.deepcopy(output)
for output in predictions:
output['label'] = output['label'][0][9:].upper()
print(predictions)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%local
#Clean up resources created as part of this notebook
#delete endpoint
bt_model.delete_endpoint()
# empty s3 bucket we created
s3_bucket_to_remove = "s3://{}".format(bucket)
!aws s3 rm {s3_bucket_to_remove} --recursive
%%cleanup -f
###Output
_____no_output_____ |
notebooks/6D_mg_KRT_crm_checks.ipynb | ###Markdown
06/29/2020KRT (GDSD6 checks)
###Code
# basic packages
import os, glob
import pandas as pd
import numpy as np; np.random.seed(0)
import itertools
from collections import Counter, defaultdict
import time
# machine learning packages from sklearn
from sklearn.preprocessing import MinMaxScaler #StandardScaler
from sklearn import preprocessing, metrics
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split, KFold, cross_validate, cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression, Lasso, LassoCV
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, VotingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, auc, roc_curve, plot_roc_curve, confusion_matrix, accuracy_score
from sklearn.metrics import explained_variance_score
from scipy import interp
import scipy.stats as stats
from subprocess import call
from IPython.display import Image
# for IRF
from functools import reduce
# Needed for the scikit-learn wrapper function
import irf
from irf import (irf_utils, utils,
irf_jupyter_utils)
from irf.ensemble.wrf import RandomForestClassifierWithWeights
from math import ceil
# Import our custom utilities
from imp import reload
# Import tools needed for visualization
import seaborn as sns; sns.set()
import matplotlib
import matplotlib.pyplot as plt
from sklearn.tree import export_graphviz
import pydot
%load_ext autoreload
%autoreload 2
save_dir = '../data/processed/fig4_modelling/KRT_ex'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
THRES=1
normal_tissues = ['Airway','Astrocytes','Bladder','Colon','Esophageal','GDSD6','GM12878','HMEC','Melanocytes','Ovarian',
'Pancreas','Prostate','Renal','Thyroid','Uterine']
normal_tissues_dict = dict(zip(normal_tissues,range(len(normal_tissues))))
rna_df = pd.read_csv('../data/interim/rna/tissue_tpm_sym.csv',index_col=0)
rna_KRT_dict = pd.Series(rna_df.GDSD6.values, index=rna_df.index.values).to_dict()
###Output
_____no_output_____
###Markdown
0. Data Wrangling: import, preprocess
###Code
# import
data_all = pd.read_csv('/Users/mguo123/Google Drive/1_khavari/omics_project-LD/pan_omics/data/processed/tissue_crms/all_count_comb_overall.csv',index_col=0,header=0)
data_all = data_all[data_all.tissue.isin(normal_tissues)]
data_all = data_all[data_all.iloc[:,2:].sum(axis=1)>1e-1]
# expression labels
exp_label = list(np.log10(data_all.exp.values+1e-2))
labels_all = np.array(np.array(exp_label)>THRES)
tissues_label = data_all.tissue.values#np.array((data_all.exp>THRES).values)
tissue_num_labels = data_all.tissue.map(normal_tissues_dict).values
genes_all = data_all.index.values
gene_to_num_dict = dict(zip(np.unique(genes_all),range(len(np.unique(genes_all)))))
genes_num_all = np.vectorize(gene_to_num_dict.get)(genes_all)
print('files_loaded', data_all.shape)
data_all[:5]
## only tfs
data_all.drop(['tissue','exp','num_loop_counts','num_loops','num_atac_regions_pro','num_atac_regions_loop'],axis=1,inplace=True)
data_all.shape
selector = VarianceThreshold()
data_all_varfilt = selector.fit_transform(data_all)
data_all_varfilt_cols = data_all.columns[selector.get_support()]
print(data_all.shape, data_all_varfilt.shape, len(data_all_varfilt_cols))
scaler = MinMaxScaler()
data_all_norm = scaler.fit_transform(data_all_varfilt)
data_all_norm = pd.DataFrame(data_all_norm, columns = data_all_varfilt_cols)
data_all_norm[:5]
###Output
_____no_output_____
###Markdown
1. See if there is even a correlation in expression between GDSD6 genes and some of the famous TFs (AP2B, JUN, FOS, etc.)
###Code
KRT_genes = pd.read_csv('../../rnaseq/unique_gene_lists/'+'GDSD6'+'_genes.txt',header=None).loc[:,0]
len(KRT_genes)
KRT_tfs = sorted(set(list(pd.read_csv('../data/external/krt_tfs_063020.csv')['tfs'])))
print(KRT_tfs,len(KRT_tfs))
# data_all['tissue'] = tissues_label
tfs_feat_dict = defaultdict(list)
for feat in data_all.columns:
tfs_feat_dict[feat.split('_')[0]].append(feat)
KRT_TF_feat = []
KRT_TF_dict = defaultdict(list)
for tf in KRT_tfs:
if tf+'_pro' in data_all.columns:
KRT_TF_feat.append(tf+'_pro')
KRT_TF_dict[tf].append(tf+'_pro')
if tf+'_loop' in data_all.columns:
KRT_TF_feat.append(tf+'_loop')
KRT_TF_dict[tf].append(tf+'_loop')
KRT_TF_dict['all'] = KRT_TF_feat
KRT_crm = data_all[tissues_label=='GDSD6']
KRT_exp_arr = np.array(exp_label)[tissues_label=='GDSD6']
KRT_exp_genes_arr = set(list(KRT_crm.index[np.array(KRT_exp_arr)>THRES]))
KRT_exp_genes_num = len(KRT_exp_genes_arr)#np.where(np.array(KRT_exp_arr)>THRES)[0])
KRT_crm_KRT_genes = KRT_crm[KRT_crm.index.isin(KRT_genes)]
# num KRT expressed genes,
KRT_exp_genes_num, KRT_crm.shape, KRT_exp_genes_num/KRT_crm.shape[0]
KRT_crm#.loc['TP63',]
KRT_tf_expr = {}
not_expr_tf=[]
for tf in KRT_tfs:
exp = rna_KRT_dict.get(tf)
if exp is None:
exp=0
KRT_tf_expr[tf]=exp
if exp<THRES:
not_expr_tf.append(tf)
print(tf)
print(len(not_expr_tf))
###Output
CEBPA
E2F1
FOXF2
KER2
LDB2
OTX1
PBX1
PBX2
POU1F1
POU3F1
POU3F2
POU3F3
POU3F4
POU4F1
POU4F2
POU4F3
POU5F1
POU5F1B
POU6F1
POU6F2
PRRX1
SOX11
TFAP2B
TFAP2D
TWIST2
ZEB1
26
###Markdown
get correlation between expression and feature value
###Code
corr_dict = {}
expr_genes_set = set()
for feat in data_all.columns:
expr_genes = set(list(KRT_crm.index[KRT_crm[feat]>0])).intersection(KRT_exp_genes_arr)
expr_genes_set = expr_genes_set.union(expr_genes)
num_expr_genes = len(expr_genes)
    corr_dict[feat] = [np.corrcoef(KRT_exp_arr, KRT_crm[feat])[0][1], feat in KRT_TF_feat, num_expr_genes/KRT_exp_genes_num]
corr_df = pd.DataFrame.from_dict(corr_dict,orient='index')
corr_df.columns = ['corr', 'KRT_TF','frac_expr_genes']
corr_df['corr_sq'] = corr_df['corr'].apply(lambda x:x**2)
corr_df = corr_df.sort_values('corr_sq',ascending=False)
corr_df.reset_index(inplace=True)
corr_df.columns = ['feat', 'corr', 'KRT_TF','frac_expr_genes', 'corr_sq']
corr_df.reset_index(inplace=True)
corr_df.fillna(0,inplace=True)
print('all', len(expr_genes_set)/KRT_exp_genes_num)
corr_df.to_csv(os.path.join(save_dir, 'stats_corr_KRT.csv'))
corr_df.sort_values('corr_sq',ascending=False)[:50]
sns.scatterplot(x='index', y='corr_sq',alpha=1, data=corr_df[corr_df.KRT_TF])
###Output
_____no_output_____
###Markdown
Results: we can see that the correlation between expression of KRT genes and TF footprinting features is quite low. Furthermore, there seems to be no association between the r^2 rank and whether or not the feature contains a KRT TF (manually annotated).
###Code
# KRT_crm['SOX13_loop'].describe()
# corr_df#[corr_df.KRT_TF]
###Output
_____no_output_____
###Markdown
Next, we see if the KRT TFs are significantly enriched in KRT genes.
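The cell below builds, for each TF, a 2x2 contingency table (rows: footprint counts within vs. outside KRT-gene CRMs; columns: that TF's footprints vs. all other footprints) and runs a one-sided Fisher's exact test. A toy illustration of that layout with made-up numbers:
```python
# Toy example (made-up numbers) of the 2x2 layout and test used below.
import numpy as np
from scipy import stats

toy_counts = np.array([[30, 70],      # KRT genes:   30 hits for the TF, 70 hits for all other TFs
                       [100, 800]])   # other genes: 100 hits for the TF, 800 hits for all other TFs
oddsratio, pval = stats.fisher_exact(toy_counts, alternative='greater')
print(oddsratio, pval)  # oddsratio > 1 suggests enrichment of the TF in KRT genes
```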
###Code
KRT_crm.shape
results = {}
count_all = data_all.sum().sum()
count_KRTgene = KRT_crm.sum().sum() #mat_counts.sum(axis=1)[0], sum first row
for tf, feat_list in KRT_TF_dict.items():
if len(feat_list)>0:
# print(tf)
KRT_crm_KRT_genes = KRT_crm[feat_list]
count_KRTtf_KRTgene = KRT_crm_KRT_genes.sum().sum() # A
count_KRTtf = data_all[feat_list].sum().sum() #mat_counts.sum(axis=0)[0], sum down first col
count_KRTtf_neg = count_KRTtf - count_KRTtf_KRTgene # B
count_neg_KRTgene = count_KRTgene - count_KRTtf_KRTgene #C
count_neg_neg = count_all - count_KRTgene - count_KRTtf_neg #D
mat_counts = np.array([[count_KRTtf_KRTgene,count_neg_KRTgene],
[count_KRTtf_neg, count_neg_neg]]).reshape((2,2))
pseudo = 1
mat_counts_pseudo = mat_counts+pseudo
num_in_1 = mat_counts.sum(axis=1)[0] #count_KRTgene
num_in_2 = mat_counts.sum(axis=0)[0] #count_KRTtf
in_1_and_in_2 = count_KRTtf_KRTgene
in_1_or_in_2 = count_KRTgene +count_KRTtf_neg
in_1 = count_KRTgene
in_2 = count_KRTtf
observed_num = mat_counts[0][0] #count_KRTtf_KRTgene
expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
jaccard = in_1_and_in_2/in_1_or_in_2
intersect_over_min = in_1_and_in_2/min(in_1,in_2)
results[tf] = { 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
'intersection':in_1_and_in_2,
'union':in_1_or_in_2,
'num_in_1':num_in_1,'num_in_2':num_in_2,
'observed':observed_num, 'expected':expected_num,
'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
result_df = pd.DataFrame.from_dict(results,orient='index')
result_df['pval_bonf'] = result_df.pval.apply(lambda x: min(1, x* sum(sum(mat_counts))))#result_df.shape[0]))
result_df['log_pval_bonf'] = result_df.pval_bonf.apply(lambda x: min(100,-np.log10(x+1e-100)))
result_df.to_csv(os.path.join(save_dir, 'stats_fisher_KRTtfs.csv'))
display(result_df[:5])
result_df.shape
# result_df_filt  # note: result_df_filt is only defined later, in the pair analysis below
# # for all
# KRT_crm = data_all[tissues_label=='GDSD6']
# KRT_crm_KRT_genes = KRT_crm[KRT_TF_feat]
# count_KRTtf_KRTgene = KRT_crm_KRT_genes.sum().sum() # A
# count_all = data_all.sum().sum()
# count_KRTgene = KRT_crm.sum().sum() #mat_counts.sum(axis=1)[0], sum first row
# count_KRTtf = data_all[KRT_TF_feat].sum().sum() #mat_counts.sum(axis=0)[0], sum down first col
# count_KRTtf_neg = count_KRTtf - count_KRTtf_KRTgene # B
# count_neg_KRTgene = count_KRTgene - count_KRTtf_KRTgene #C
# count_neg_neg = count_all - count_KRTgene - count_KRTtf_neg #D
# mat_counts = np.array([[count_KRTtf_KRTgene,count_neg_KRTgene],
# [count_KRTtf_neg, count_neg_neg]]).reshape((2,2))
# pseudo = 1
# mat_counts_pseudo = mat_counts+pseudo
# num_in_1 = mat_counts.sum(axis=1)[0] #count_KRTgene
# num_in_2 = mat_counts.sum(axis=0)[0] #count_KRTtf
# in_1_and_in_2 = count_KRTtf_KRTgene
# in_1_or_in_2 = count_KRTgene +count_KRTtf_neg
# in_1 = count_KRTgene
# in_2 = count_KRTtf
# observed_num = mat_counts[0][0] #count_KRTtf_KRTgene
# expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
# oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
# jaccard = in_1_and_in_2/in_1_or_in_2
# intersect_over_min = in_1_and_in_2/min(in_1,in_2)
# results = { 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
# 'intersection':in_1_and_in_2,
# 'union':in_1_or_in_2,
# 'num_in_1':num_in_1,'num_in_2':num_in_2,
# 'observed':observed_num, 'expected':expected_num,
# 'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
# # results
###Output
_____no_output_____
###Markdown
Do some Excel work (see pptx). Next, we check which TF footprints are significantly associated with KRT genes being expressed.
###Code
# expression
KRT_exp_crm = KRT_crm.iloc[np.where(np.array(KRT_exp_arr)>THRES)[0],:]
results = {}
count_all = KRT_crm.sum().sum()
count_KRTgene = KRT_exp_crm.sum().sum() #mat_counts.sum(axis=1)[0], sum first row
for tf, feat_list in tfs_feat_dict.items():
if len(feat_list)>0:
# print(tf)
KRT_crm_KRT_genes = KRT_exp_crm[feat_list]
count_KRTtf_KRTgene = KRT_crm_KRT_genes.sum().sum() # A
if count_KRTtf_KRTgene==0:
continue
count_KRTtf = KRT_crm[feat_list].sum().sum() #mat_counts.sum(axis=0)[0], sum down first col
count_KRTtf_neg = count_KRTtf - count_KRTtf_KRTgene # B
count_neg_KRTgene = count_KRTgene - count_KRTtf_KRTgene #C
count_neg_neg = count_all - count_KRTgene - count_KRTtf_neg #D
mat_counts = np.array([[count_KRTtf_KRTgene,count_neg_KRTgene],
[count_KRTtf_neg, count_neg_neg]]).reshape((2,2))
pseudo = 1
mat_counts_pseudo = mat_counts+pseudo
num_in_1 = mat_counts.sum(axis=1)[0] #count_KRTgene
num_in_2 = mat_counts.sum(axis=0)[0] #count_KRTtf
in_1_and_in_2 = count_KRTtf_KRTgene
in_1_or_in_2 = count_KRTgene +count_KRTtf_neg
in_1 = count_KRTgene
in_2 = count_KRTtf
observed_num = mat_counts[0][0] #count_KRTtf_KRTgene
expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
jaccard = in_1_and_in_2/in_1_or_in_2
intersect_over_min = in_1_and_in_2/min(in_1,in_2)
results[tf] = { 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
'intersection':in_1_and_in_2,
'union':in_1_or_in_2,
'num_in_1':num_in_1,'num_in_2':num_in_2,
'observed':observed_num, 'expected':expected_num,
'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
result_df_exp = pd.DataFrame.from_dict(results,orient='index')
result_df_exp['pval_bonf'] = result_df_exp.pval.apply(lambda x: min(1, x* result_df.shape[0]))
result_df_exp['log_pval_bonf'] = result_df_exp.pval_bonf.apply(lambda x: min(100,-np.log10(x+1e-100)))
result_df_exp['is_KRT_TF'] = result_df_exp.index.isin(KRT_tfs)
result_df_exp.to_csv(os.path.join(save_dir, 'stats_fisher_KRTexp.csv'))
result_df_exp.sort_values('oddsratio',ascending=False)
result_df_exp.sort_values('oddsratio',ascending=False)[:10]
# results[result_idx] = {'color_row':color_1, 'color_col':color_2,
# 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
# 'intersection':len(in_1_and_in_2),
# 'union':len(in_1_or_in_2),
# 'num_in_1':num_in_1,'num_in_2':num_in_2,
# 'observed':observed_num, 'expected':expected_num,
# 'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
# sns.distplot(KRT_crm_KRT_genes['JUND_loop'])
# def comp_two_gene_dicts(gene_dict1, gene_dict2, pseudo=0):
# """
# 1: rows
# 2: columns
# """
# bg = set(gene_dict1['background']) | set(gene_dict2['background'])
# results = {}
# result_idx = 0
# for color_1 in ['purple','green', 'grey', 'blue']:
# for color_2 in ['purple','green', 'grey', 'blue']:
# geneset_1 = set(gene_dict1[color_1])
# geneset_2 = set(gene_dict2[color_2])
# in_1_and_in_2 = geneset_1 & geneset_2
# in_1_not_2 = geneset_1 - geneset_2
# not_1_in_2 = geneset_2 - geneset_1
# in_1_or_in_2 = geneset_1 | geneset_2
# not_1_not_2 = bg - in_1_or_in_2
# mat_counts = np.array([[len(in_1_and_in_2), len(in_1_not_2)],
# [len(not_1_in_2), len(not_1_not_2)]]).reshape((2,2))
# mat_counts_pseudo = mat_counts+pseudo
# # if (color_1=='grey') & (color_2=='grey'):
# # print(mat_counts)
# num_in_1 = mat_counts.sum(axis=1)[0]
# num_in_2 = mat_counts.sum(axis=0)[0]
# observed_num = mat_counts[0][0]
# expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
# oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
# jaccard = len(in_1_and_in_2)/len(in_1_or_in_2)
# intersect_over_min = len(in_1_and_in_2)/min(num_in_1,num_in_2)
# results[result_idx] = {'color_row':color_1, 'color_col':color_2,
# 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
# 'intersection':len(in_1_and_in_2),
# 'union':len(in_1_or_in_2),
# 'num_in_1':num_in_1,'num_in_2':num_in_2,
# 'observed':observed_num, 'expected':expected_num,
# 'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
# result_idx+=1
# result_df = pd.DataFrame.from_dict(results,orient='index')
# result_df['pval_bonf'] = result_df.pval.apply(lambda x: min(1, x* sum(sum(mat_counts))))#result_df.shape[0]))
# result_df['log_pval_bonf'] = result_df.pval_bonf.apply(lambda x: min(100,-np.log10(x+1e-100)))
# return result_df
###Output
_____no_output_____
###Markdown
2. check relationship between TFAP2C and KLF4
###Code
cols_to_check=['TFAP2C_pro','TFAP2C_loop','KLF4_pro','KLF4_loop']
f = sns.pairplot(KRT_crm[cols_to_check],kind='reg')
plt.savefig(os.path.join(save_dir, 'TFAP2C_KLF4_corr_check.png'))
###Output
_____no_output_____
###Markdown
5. The next check is for pairs of KRT TFs, to see which pairs are significantly enriched in KRT genes
###Code
tfs_w_feats = set()
for col in data_all.columns:
tfs_w_feats.add(col.split('_')[0])
len(tfs_w_feats)
KRT_TF_pair_dict = defaultdict(list)
for tf1 in KRT_tfs:
for tf2 in KRT_tfs:
if (tf1<tf2):
if (tf1 in tfs_w_feats) and (tf2 in tfs_w_feats):
possible_feats = [tf1+'_pro',tf1+'_loop',tf2+'_pro',tf2+'_loop']
for feat in possible_feats:
if feat in data_all.columns:
KRT_TF_pair_dict[tf1+'::'+tf2].append(feat)
# KRT_TF_pair_dict[tf1+'::'+tf2]=[]
len(KRT_TF_pair_dict)
# for key, list_feat in KRT_TF_pair_dict.items():
# if len(list_feat)<4:
# print(key, list_feat)
%%time
counter = 0
results = {}
count_all = data_all.sum().sum()
count_KRTgene = KRT_crm.sum().sum() #mat_counts.sum(axis=1)[0], sum first row
for tf, feat_list in KRT_TF_pair_dict.items():
if len(feat_list)>0:
counter+=1
if (counter%100)==0:
print(counter, tf)
# print(tf)
KRT_crm_KRT_genes = KRT_crm[feat_list]
count_KRTtf_KRTgene = KRT_crm_KRT_genes.sum().sum() # A
count_KRTtf = data_all[feat_list].sum().sum() #mat_counts.sum(axis=0)[0], sum down first col
count_KRTtf_neg = count_KRTtf - count_KRTtf_KRTgene # B
count_neg_KRTgene = count_KRTgene - count_KRTtf_KRTgene #C
count_neg_neg = count_all - count_KRTgene - count_KRTtf_neg #D
mat_counts = np.array([[count_KRTtf_KRTgene,count_neg_KRTgene],
[count_KRTtf_neg, count_neg_neg]]).reshape((2,2))
pseudo = 1
mat_counts_pseudo = mat_counts+pseudo
num_in_1 = mat_counts.sum(axis=1)[0] #count_KRTgene
num_in_2 = mat_counts.sum(axis=0)[0] #count_KRTtf
in_1_and_in_2 = count_KRTtf_KRTgene
in_1_or_in_2 = count_KRTgene +count_KRTtf_neg
in_1 = count_KRTgene
in_2 = count_KRTtf
observed_num = mat_counts[0][0] #count_KRTtf_KRTgene
expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
jaccard = in_1_and_in_2/in_1_or_in_2
intersect_over_min = in_1_and_in_2/min(in_1,in_2)
results[tf] = { 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
'intersection':in_1_and_in_2,
'union':in_1_or_in_2,
'num_in_1':num_in_1,'num_in_2':num_in_2,
'observed':observed_num, 'expected':expected_num,
'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
result_df = pd.DataFrame.from_dict(results,orient='index')
result_df['pval_bonf'] = result_df.pval.apply(lambda x: min(1, x* sum(sum(mat_counts))))#result_df.shape[0]))
result_df['log_pval_bonf'] = result_df.pval_bonf.apply(lambda x: min(100,-np.log10(x+1e-100)))
result_df.to_csv(os.path.join(save_dir, 'stats_fisher_KRTtfs_pairs.csv'))
result_df
result_df_filt = result_df[(result_df.pval_bonf<0.05) & (result_df.oddsratio>1)& (result_df.oddsratio<1000)]
print(result_df_filt.shape, result_df.shape)
result_df_filt.sort_values('oddsratio',ascending=False)[:10]
result_mat_df = pd.DataFrame(index=KRT_tfs, columns = KRT_tfs).fillna(0)
for pair in result_df_filt.index.values:
tf1,tf2 = pair.split('::')
result_mat_df.at[tf1,tf2]=1
result_mat_df.at[tf2,tf1]=1
result_mat_df.to_csv(os.path.join(save_dir, 'stats_fisher_KRTtfs_pairs_mat.csv'))
###Output
_____no_output_____
###Markdown
KRT gene pairs with expression
###Code
counter = 0
results = {}
count_all = KRT_crm.sum().sum()
count_KRTgene = KRT_exp_crm.sum().sum() #mat_counts.sum(axis=1)[0], sum first row
for tf, feat_list in KRT_TF_pair_dict.items():
if len(feat_list)>0:
counter+=1
if (counter%100)==0:
print(counter, tf)
# print(tf)
KRT_crm_KRT_genes = KRT_exp_crm[feat_list]
count_KRTtf_KRTgene = KRT_crm_KRT_genes.sum().sum() # A
if count_KRTtf_KRTgene==0:
continue
count_KRTtf = KRT_crm[feat_list].sum().sum() #mat_counts.sum(axis=0)[0], sum down first col
count_KRTtf_neg = count_KRTtf - count_KRTtf_KRTgene # B
count_neg_KRTgene = count_KRTgene - count_KRTtf_KRTgene #C
count_neg_neg = count_all - count_KRTgene - count_KRTtf_neg #D
mat_counts = np.array([[count_KRTtf_KRTgene,count_neg_KRTgene],
[count_KRTtf_neg, count_neg_neg]]).reshape((2,2))
pseudo = 1
mat_counts_pseudo = mat_counts+pseudo
num_in_1 = mat_counts.sum(axis=1)[0] #count_KRTgene
num_in_2 = mat_counts.sum(axis=0)[0] #count_KRTtf
in_1_and_in_2 = count_KRTtf_KRTgene
in_1_or_in_2 = count_KRTgene +count_KRTtf_neg
in_1 = count_KRTgene
in_2 = count_KRTtf
observed_num = mat_counts[0][0] #count_KRTtf_KRTgene
expected_num = num_in_1*num_in_2/sum(sum(mat_counts))
oddsratio_pseudo, pvalue_pseudo = stats.fisher_exact(mat_counts_pseudo,alternative='greater')
jaccard = in_1_and_in_2/in_1_or_in_2
intersect_over_min = in_1_and_in_2/min(in_1,in_2)
results[tf] = { 'jaccard':jaccard,'intersect_over_min':intersect_over_min,
'intersection':in_1_and_in_2,
'union':in_1_or_in_2,
'num_in_1':num_in_1,'num_in_2':num_in_2,
'observed':observed_num, 'expected':expected_num,
'oddsratio':oddsratio_pseudo, 'pval':pvalue_pseudo}
result_df = pd.DataFrame.from_dict(results,orient='index')
result_df['pval_bonf'] = result_df.pval.apply(lambda x: min(1, x* sum(sum(mat_counts))))#result_df.shape[0]))
result_df['log_pval_bonf'] = result_df.pval_bonf.apply(lambda x: min(100,-np.log10(x+1e-100)))
result_df.to_csv(os.path.join(save_dir, 'stats_fisher_KRTtfs_exp_pairs.csv'))
result_df
result_df_filt = result_df[(result_df.pval<0.05) & (result_df.oddsratio>1)& (result_df.oddsratio<1000)]
print(result_df_filt.shape, result_df.shape)
result_df_filt.sort_values('oddsratio',ascending=False)[:50]
result_df_filt.loc['KLF4::TFAP2C',:]
###Output
_____no_output_____
###Markdown
5. Next, check the interactions found by the stability score and annotate the number of KRT TFs each interaction has
###Code
score_thres=.2
def check_interaction(interaction_str):
feat_arr = interaction_str.split('::')
num_KRT = 0
num_tot = len(feat_arr)
for feat in feat_arr:
if feat in KRT_TF_feat:
num_KRT+=1
return [num_KRT,num_tot]
# GDSD6_stability_df = pd.read_csv('../data/processed/fig4_modelling/irf_manual/test_GDSD6_GDSD6_boosted_stability_score.csv')
GDSD6_stability_df = pd.read_csv('../data/processed/fig4_modelling/irf_manual/test_GDSD6_purple_boosted_stability_score.csv')
GDSD6_stability_df
GDSD6_stability_df[['num_KRT','num_in_interaction']]=GDSD6_stability_df['index'].apply(func=check_interaction).apply(pd.Series)#,result_type='expand')
GDSD6_stability_df['frac_KRT'] = GDSD6_stability_df['num_KRT']/GDSD6_stability_df['num_in_interaction']
sns.pairplot(GDSD6_stability_df[['score','frac_KRT']],kind='reg')
GDSD6_stability_df[(GDSD6_stability_df.score >score_thres)]
GDSD6_stability_df[(GDSD6_stability_df.frac_KRT>0) & (GDSD6_stability_df.score >score_thres)]
###Output
_____no_output_____
###Markdown
SKIN trajectory analysis
###Code
# rna
rna_df = pd.read_csv('../data/interim/rna/tissue_tpm_sym.csv',index_col=0)
rna_df_norm = rna_df[normal_tissues]
rna_D0_dict = pd.Series(rna_df.GDSD0.values, index=rna_df.index.values).to_dict()
rna_D3_dict = pd.Series(rna_df.GDSD3.values, index=rna_df.index.values).to_dict()
rna_D6_dict = pd.Series(rna_df.GDSD6.values, index=rna_df.index.values).to_dict()
%%time
D0_crm = pd.read_csv('/Users/mguo123/Google Drive/1_khavari/omics_project-LD/pan_omics/data/processed/tissue_crms/pro_loop_tissue/GDSD0_crm.csv',index_col=0,header=0)
D0_crm.drop(['tissue','exp','num_loop_counts','num_loops','num_atac_regions_pro','num_atac_regions_loop'],axis=1,inplace=True)
D3_crm = pd.read_csv('/Users/mguo123/Google Drive/1_khavari/omics_project-LD/pan_omics/data/processed/tissue_crms/pro_loop_tissue/GDSD3_crm.csv',index_col=0,header=0)
D3_crm.drop(['tissue','exp','num_loop_counts','num_loops','num_atac_regions_pro','num_atac_regions_loop'],axis=1,inplace=True)
D6_crm = pd.read_csv('/Users/mguo123/Google Drive/1_khavari/omics_project-LD/pan_omics/data/processed/tissue_crms/pro_loop_tissue/GDSD6_crm.csv',index_col=0,header=0)
D6_crm.drop(['tissue','exp','num_loop_counts','num_loops','num_atac_regions_pro','num_atac_regions_loop'],axis=1,inplace=True)
# what genes
D0_crm
###Output
_____no_output_____ |
Module_3/Module_3_1.ipynb | ###Markdown
Module 3.1 Goal: make code - **readable**- **reuseable**- testable Approach: **modularization** (create small reuseable pieces of code)- functions- modules Unit 5: Functions Python functions* `len(a)`* `max(a)`* `sum(a)`* `range(start, stop, step)` or `range(stop)`* `print(s)` or `print(s1, s2, ...)` or `print(s, end=" ")` Functions have* **name*** **arguments** (in **parentheses**) — optional (but _always_ parentheses)* **return value** — (can be `None`)
###Code
arg = ['a', 'b', 'c']
retval = len(arg)
retval
retval = print("hello")
print(retval)
retval is None
###Output
_____no_output_____
###Markdown
Defining functions

```python
def func_name(arg1, arg2, ...):
    """documentation string (optional)"""
    body
    ...
    return results
```

[**Heaviside** step function](http://mathworld.wolfram.com/HeavisideStepFunction.html) (again):$$\Theta(x) = \begin{cases} 0 & x < 0 \\ \frac{1}{2} & x = 0\\ 1 & x > 0 \end{cases}$$
###Code
x = 1.23
theta = None
if x < 0:
theta = 0
elif x > 0:
theta = 1
else:
theta = 0.5
print("theta({0}) = {1}".format(x, theta))
###Output
theta(1.23) = 1
###Markdown
[**Heaviside** step function](http://mathworld.wolfram.com/HeavisideStepFunction.html) (again):$$\Theta(x) = \begin{cases} 0 & x < 0 \\ \frac{1}{2} & x = 0\\ 1 & x > 0 \end{cases}$$
###Code
def Heaviside(x):
theta = None
if x < 0:
theta = 0
elif x > 0:
theta = 1
else:
theta = 0.5
return theta
def Heaviside(x):
theta = None
if x < 0:
theta = 0
elif x > 0:
theta = 1
else:
theta = 0.5
return theta
###Output
_____no_output_____
###Markdown
Now **call** the function:
###Code
Heaviside(0)
Heaviside(1.2)
###Output
_____no_output_____
###Markdown
Add doc string:
###Code
def Heaviside(x):
"""Heaviside step function
Parameters
----------
x : float
Returns
-------
float
"""
theta = None
if x < 0:
theta = 0
elif x > 0:
theta = 1
else:
theta = 0.5
return theta
help(Heaviside)
###Output
Help on function Heaviside in module __main__:
Heaviside(x)
Heaviside step function
Parameters
----------
x : float
Returns
-------
float
###Markdown
Make code more concise:
###Code
def Heaviside(x):
"""Heaviside step function
Parameters
----------
x : float
Returns
-------
float
"""
if x < 0:
return 0
elif x > 0:
return 1
return 0.5
X = [i*0.5 for i in range(-3, 4)]
Y = [Heaviside(x) for x in X]
print(list(zip(X, Y)))
###Output
[(-1.5, 0), (-1.0, 0), (-0.5, 0), (0.0, 0.5), (0.5, 1), (1.0, 1), (1.5, 1)]
###Markdown
Multiple return values Functions always return a single object.- `None`- basic data type (float, int, str, ...)- container data type, e.g. a list or a **tuple**- _any_ object Move a particle at coordinate `r = [x, y]` by a translation vector `[tx, ty]`:
###Code
def translate(r, t):
"""Return r + t for 2D vectors r, t"""
x1 = r[0] + t[0]
y1 = r[1] + t[1]
return [x1, y1]
pos = [1, -1]
tvec = [9, 1]
new_pos = translate(pos, tvec)
new_pos
###Output
_____no_output_____
###Markdown
[Metal umlaut](https://en.wikipedia.org/wiki/Metal_umlaut) search and replace: replace all "o" with "ö" and "u" with "ü": return the new string and the number of replacements.
###Code
def metal_umlaut_search_replace(name):
new_name = ""
counter = 0
for char in name:
if char == "o":
char = "ö"
counter += 1
elif char == "u":
char = "ü"
counter += 1
new_name += char
return new_name, counter # returns the tuple (new_name, counter)
metal_umlaut_search_replace("Motely Crue")
retval = metal_umlaut_search_replace("Motely Crue")
type(retval)
retval[0]
###Output
_____no_output_____
###Markdown
Use *tuple unpacking* to get the returned values:
###Code
name, n = metal_umlaut_search_replace("Motely Crue")
print(name, "rocks", n, "times harder now!")
###Output
Mötely Crüe rocks 2 times harder now!
###Markdown
Variable argument lists Functions have arguments in their _call signature_, e.g., the `x` in `def Heaviside(x)` or `x` and `y` in a function `area()`:
###Code
def area(x, y):
"""Calculate area of rectangle with lengths x and y"""
return x*y
###Output
_____no_output_____
###Markdown
Add functionality: calculate the area when you scale the rectangle with a factor `scale`.
###Code
def area(x, y, scale):
"""Calculate scaled area of rectangle with lengths x and y and scale factor scale"""
return scale*x*y
###Output
_____no_output_____
###Markdown
**Inconvenience**: even for unscaled rectangles I always have to provide `scale=1`, i.e.,```pythonarea(Lx, Ly, 1)``` **Optional argument** with **default value**:
###Code
def area(x, y, scale=1):
"""Calculate scaled area of rectangle with lengths `x` and `y`.
scale factor `scale` defaults to 1
"""
return scale*x*y
Lx, Ly = 2, 10.5
print(area(Lx, Ly)) # uses scale=1
print(area(Lx, Ly, scale=0.5))
print(area(Lx, Ly, scale=2))
print(area(Lx, Ly, 2)) # DISCOURAGED, use scale=2
###Output
42.0
###Markdown
Variable arguments summary

```python
def func(arg1, arg2, kwarg1="abc", kwarg2=None, kwarg3=1):
    ...
```

* *positional arguments* (`arg1`, `arg2`):
  * all need to be provided when the function is called: `func(a, b)`
  * must be in the given order
* *keyword arguments* (`kwarg1="abc"`, ...):
  * optional; set to the default if not provided
  * no fixed order: `func(a, b, kwarg2=1000, kwarg1="other")`

See more under [More on Defining Functions](https://docs.python.org/3/tutorial/controlflow.html#more-on-defining-functions) in the Python Tutorial. Unit 6: Modules and packages _Modules_ (and _packages_) are **libraries** of reusable code blocks (e.g. functions). Example: `math` module
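A quick aside before the `math` example: a minimal sketch of the calling rules just summarized. The function `describe` is made up for illustration and is not part of the lesson — positional arguments come first and in order, keyword arguments are optional and may be given in any order.

```python
def describe(name, value, units="m", precision=2):
    return f"{name} = {value:.{precision}f} {units}"

print(describe("length", 3.14159))                    # both defaults used
print(describe("length", 3.14159, precision=4))       # skip `units`, set `precision`
print(describe("mass", 5.5, precision=1, units="kg")) # keyword order does not matter
```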
###Code
import math
math.sin(math.pi/3)
###Output
_____no_output_____
###Markdown
Creating a module A module is just a file with Python code. Create `physics.py` with content

```python
# PHY194 physics module
pi = 3.14159
h = 6.62606957e-34

def Heaviside(x):
    """Heaviside function Theta(x)"""
    if x < 0:
        return 0
    elif x > 0:
        return 1
    return 0.5
```

Importing Import it
###Code
import physics
###Output
_____no_output_____
###Markdown
*Note*: `physics.py` must be in the same directory! Access contents with the dot `.` operator:
###Code
two_pi = 2 * physics.pi
h_bar = physics.h / two_pi
print(h_bar)
physics.Heaviside(2)
###Output
_____no_output_____
###Markdown
**Direct import** (use sparingly as it can become messy)
###Code
from physics import h, pi
h_bar = h / (2*pi)
print(h_bar)
###Output
1.0545726160956712e-34
###Markdown
**Aliased** import:
###Code
import physics as phy
h_bar = phy.h / (2*phy.pi)
print(h_bar)
###Output
1.0545726160956712e-34
|
DaisyAI.ipynb | ###Markdown
**Prediction Time**
###Code
model = tf.keras.models.load_model(save_directory)
for path in sorted(glob("./Image_Test/*")):
img = tf.keras.preprocessing.image.load_img(path, target_size=(image_height, image_width, 3))
# Convert the image to an array, where each pixel is a list of 3 RGB values
img_array = tf.keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
# Predict how closely the image matches each of the classes the model knows
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(f"L'Image {path} appartient sûrement aux {class_names[np.argmax(score)]} avec {round(np.max(score)*100, 2)}% de sûreté")
###Output
L'Image ./Image_Test/Colin_test2.png appartient sûrement aux /Persian avec 99.68% de sûreté
L'Image ./Image_Test/Colin_test3.png appartient sûrement aux /Persian avec 98.06% de sûreté
L'Image ./Image_Test/birman1.jpg appartient sûrement aux /Birman avec 96.74% de sûreté
L'Image ./Image_Test/birman2.jpg appartient sûrement aux /Ragdoll avec 87.27% de sûreté
L'Image ./Image_Test/birman3.jpg appartient sûrement aux /Ragdoll avec 83.92% de sûreté
L'Image ./Image_Test/birman4.jpg appartient sûrement aux /Ragdoll avec 58.67% de sûreté
L'Image ./Image_Test/birman5.jpg appartient sûrement aux /Birman avec 85.97% de sûreté
L'Image ./Image_Test/colin.png appartient sûrement aux /Persian avec 48.71% de sûreté
L'Image ./Image_Test/demon.png appartient sûrement aux /Sphynx avec 99.39% de sûreté
L'Image ./Image_Test/edgar.jpg appartient sûrement aux /Russian_Blue avec 99.64% de sûreté
L'Image ./Image_Test/kael.png appartient sûrement aux /Persian avec 99.91% de sûreté
L'Image ./Image_Test/kael2.png appartient sûrement aux /Bengal avec 55.26% de sûreté
L'Image ./Image_Test/maine_coon.jpg appartient sûrement aux /Maine_Coon avec 99.34% de sûreté
L'Image ./Image_Test/max.png appartient sûrement aux /Bombay avec 55.33% de sûreté
L'Image ./Image_Test/ragdoll.png appartient sûrement aux /Birman avec 53.12% de sûreté
L'Image ./Image_Test/ragdoll2.jpg appartient sûrement aux /Ragdoll avec 93.8% de sûreté
L'Image ./Image_Test/ragdoll3.jpg appartient sûrement aux /Ragdoll avec 58.79% de sûreté
L'Image ./Image_Test/ragdoll4.jpg appartient sûrement aux /Ragdoll avec 99.47% de sûreté
L'Image ./Image_Test/siamese.jpg appartient sûrement aux /Siamese avec 94.87% de sûreté
L'Image ./Image_Test/siamese2.jpg appartient sûrement aux /Siamese avec 90.82% de sûreté
L'Image ./Image_Test/siamese3.jpg appartient sûrement aux /Siamese avec 96.22% de sûreté
L'Image ./Image_Test/siamese4.jpg appartient sûrement aux /Siamese avec 99.92% de sûreté
L'Image ./Image_Test/sleepy.jpg appartient sûrement aux /Bombay avec 75.69% de sûreté
L'Image ./Image_Test/sleepy2.jpg appartient sûrement aux /Maine_Coon avec 93.99% de sûreté
L'Image ./Image_Test/sleepy3.jpg appartient sûrement aux /Bengal avec 29.93% de sûreté
L'Image ./Image_Test/sylv1.jpg appartient sûrement aux /Persian avec 97.1% de sûreté
L'Image ./Image_Test/sylv2.jpg appartient sûrement aux /Persian avec 75.76% de sûreté
L'Image ./Image_Test/sylv3.jpg appartient sûrement aux /Persian avec 86.71% de sûreté
L'Image ./Image_Test/test1.jpg appartient sûrement aux /British_Shorthair avec 29.51% de sûreté
L'Image ./Image_Test/test10.png appartient sûrement aux /Persian avec 53.76% de sûreté
L'Image ./Image_Test/test2.jpg appartient sûrement aux /Bombay avec 59.2% de sûreté
L'Image ./Image_Test/test3.jpg appartient sûrement aux /Bombay avec 81.53% de sûreté
L'Image ./Image_Test/test4.jpg appartient sûrement aux /Bombay avec 99.62% de sûreté
L'Image ./Image_Test/test5.jpg appartient sûrement aux /Bombay avec 99.93% de sûreté
L'Image ./Image_Test/test6.jpg appartient sûrement aux /Maine_Coon avec 44.61% de sûreté
L'Image ./Image_Test/test7.jpg appartient sûrement aux /Persian avec 46.41% de sûreté
L'Image ./Image_Test/test8.jpg appartient sûrement aux /Maine_Coon avec 98.93% de sûreté
L'Image ./Image_Test/test9.jpg appartient sûrement aux /Ragdoll avec 25.93% de sûreté
|
SampleCode/S+P_Week_4_Lesson_1.ipynb | ###Markdown
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
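# The helper below turns the 1-D series into a tf.data pipeline of training pairs:
# each window of window_size+1 consecutive values becomes (window[:-1], window[1:]),
# i.e. the target sequence is the input sequence shifted one step ahead,
# and the windows are then shuffled, batched, and prefetched.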
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____ |
microsoft/DEV288/Module1/IntroToClassicalNlp.ipynb | ###Markdown
Introduction to Classical Natural Language Processing This notebook is a hands-on introduction to Classical NLP, that is, non-deep learning techniques of NLP. It is designed to be used with the edX course on Natural Language Processing, from Microsoft. The topics covered align with the NLP tasks related to the various stages of the NLP pipeline: text processing, text exploration, building features, and application level tasks. 1. Introduction ** 1.1 NLTK Setup ** - NLTK is included with the Anaconda Distribution of Python, or can be downloaded directly from nltk.org. - Once NLTK is installed, the text data files (corpora) should be downloaded. See the following cell to start the download.
###Code
import nltk
# uncomment the line below to download NLTK resources the first time NLTK is used and RUN this cell.
# when the "NLTK Downloader" dialog appears (takes 10-20 seconds), click on the "download" button
#nltk.download()
###Output
_____no_output_____
###Markdown
** 1.2 Crash Course in Regular Expressions **If you are new to using regular expressions, or would like a quick refresher, you can study the examples and resulting output in the code cell below. Here is a cheat sheet for the SEARCH BASICS (code examples follow below):

| Operator | Meaning | Example | Example meaning |
|----------|---------|---------|-----------------|
| `+` | one or more | `a+` | look for 1 or more "a" characters |
| `*` | zero or more | `a*` | look for 0 or more "a" characters |
| `?` | optional | `a?` | look for 0 or 1 "a" characters |
| `[]` | choose 1 | `[abc]` | look for "a" or "b" or "c" |
| `[-]` | range | `[a-z]` | look for any character between "a" and "z" |
| `[^]` | not | `[^a]` | look for a character that is not "a" |
| `()` | grouping | `(a-z)+` | look for one or more occurrences of chars between "a" and "z" |
| `(\|)` | or operator | `(ey\|ax)` | look for strings "ey" or "ax" |
| `ab` | follow | `ab` | look for character "a" followed by character "b" |
| `^` | start | `^a` | look for character "a" at start of string/line |
| `$` | end | `a$` | look for character "a" at end of string/line |
| `\s` | whitespace | `\sa` | look for whitespace character followed by "a" |
| `.` | any character | `a.b` | look for "a" followed by any char followed by "b" |

Common Uses: - re.search finds first matching object - re.findall returns all matching objects - re.sub replaces matches with replacement string
###Code
import re
# search for single char
re.search(r"x", "this is an extra helping")
# search for single char
re.search(r"x", "this is an extra helping").group(0) # gives easier-to-read output
# find all occurrences of any character between "a" and "z"
re.findall(r"[a-z]", "$34.33 cash.")
# find all occurrences of either "name:" or "phone:"
re.findall(r"(name|phone):", "My name: Joe, my phone: (312)555-1212")
# find "lion", "lions" or "Lion", or "Lions"
re.findall(r"([Ll]ion)s?", "Give it to the Lions or the lion.")
# replace all lowercase letters with "x"
re.sub("[a-z]", "x", "Hey. I know this regex stuff...")
###Output
_____no_output_____
###Markdown
2. Text Processing This section introduces some of the tasks and techniques used to acquire, clean, and normalize the text data. ** 2.1 Data Acquisition **Issues: - how do I find the data I need? - is it already in digital form, or will it need OCR? - how much will it cost? - will it be updated/expanded over time? More costs? - (if CUSTOMER DATA), do I have the legal / privacy rights needed to use the data in the way I need for my application? - do I have the safeguards needed to securely store the data?
###Code
import nltk
# shows how to access one of the gutenberg books included in NLTK
print("gutenberg book ids=", nltk.corpus.gutenberg.fileids())
# load words from "Alice in Wonderland"
alice = nltk.corpus.gutenberg.words("carroll-alice.txt")
print("len(alice)=", len(alice))
print(alice[:100])
# load words from "Monty Python and the Holy Grail"
grail = nltk.corpus.webtext.words("grail.txt")
print("len(grail)=", len(grail))
print(grail[:100])
###Output
len(grail)= 16967
['SCENE', '1', ':', '[', 'wind', ']', '[', 'clop', 'clop', 'clop', ']', 'KING', 'ARTHUR', ':', 'Whoa', 'there', '!', '[', 'clop', 'clop', 'clop', ']', 'SOLDIER', '#', '1', ':', 'Halt', '!', 'Who', 'goes', 'there', '?', 'ARTHUR', ':', 'It', 'is', 'I', ',', 'Arthur', ',', 'son', 'of', 'Uther', 'Pendragon', ',', 'from', 'the', 'castle', 'of', 'Camelot', '.', 'King', 'of', 'the', 'Britons', ',', 'defeator', 'of', 'the', 'Saxons', ',', 'sovereign', 'of', 'all', 'England', '!', 'SOLDIER', '#', '1', ':', 'Pull', 'the', 'other', 'one', '!', 'ARTHUR', ':', 'I', 'am', ',', '...', 'and', 'this', 'is', 'my', 'trusty', 'servant', 'Patsy', '.', 'We', 'have', 'ridden', 'the', 'length', 'and', 'breadth', 'of', 'the', 'land', 'in']
###Markdown
** 2.2 Plain Text Extraction **If your text data lives in a non-plain-text file (WORD, POWERPOINT, PDF, HTML, etc.), you will need to use a “filter” to extract the plain text from the file. Python has a number of libraries to extract plain text from popular file formats, but they do take some searching and supporting code to use.
** 2.3 Word and Sentence Segmentation (Tokenization) **Word Segmentation Issues: - Some languages don’t use white space characters - Words with hyphens or apostrophes (Who’s at the drive-in?) - Numbers, currency, percentages, dates, times (04/01/2018, $55,000.00) - Ellipses, special characters Sentence Segmentation Issues: - Quoted speech within a sentence - Abbreviations with periods (The Ph.D. was D.O.A) Tokenization Techniques - Perl script (50 lines) with RegEx (Grefenstette, 1999) - maxmatch Algorithm: themanranafterit -> the man ran after it thetabledownthere -> theta bled own there (Palmer, 2000)
###Code
# code example: simple version of maxmatch algorithm for tokenization (word segmentation)
def tokenize(str, dict):
s = 0
words = []
while (s < len(str)):
found = False
# find biggest word in dict that matches str[s:xxx]
for word in dict:
lw = len(word)
if (str[s:s+lw] == word):
words.append(word)
s += lw
found = True
break
if (not found):
words.append(str[s])
s += 1
print(words)
#return words
# small dictionary of known words, longest words first
dict = ["before", "table", "theta", "after", "where", "there", "bled", "said", "lead", "man", "her", "own", "the", "ran", "it"]
# this algorithm is designed to work with languages that don't have whitespace characters
# so simulate that in our test
tokenize("themanranafterit", dict) # works!
tokenize("thetabledownthere", dict) # fails!
# NLTK example: WORD segmentation
nltk.word_tokenize("the man, he ran after it's $3.23 dog on 03/23/2016.")
# NLTK example: SENTENCE segmentation
nltk.sent_tokenize('The man ran after it. The table down there? Yes, down there!')
###Output
_____no_output_____
###Markdown
** 2.4 Stopword Removal **Stopwords are common words that are "not interesting" for the app/task at hand. Easy part – removing words that appear in list. Tricky part – what to use for stop words? App-dependent. Standard lists, high-frequency words in your text, …
###Code
# code example: simple algorithm for removing stopwords
stoppers = "a is of the this".split()
def removeStopWords(stopWords, txt):
newtxt = ' '.join([word for word in txt.split() if word not in stopWords])
return newtxt
removeStopWords(stoppers, "this is a test of the stop word removal code.")
# NLTK example: removing stopwords
from nltk.corpus import stopwords
stops = stopwords.words("English")
print("len(stops)=", len(stops))
removeStopWords(stops, "this is a test of the stop word removal code.")
###Output
len(stops)= 179
###Markdown
** 2.5 Case Removal **Case removal is part of a larger task called *Text Normalization*, which includes: - case removal - stemming (covered in next section) Goal of Case removal – converting all text to, for example, lower case
###Code
# code example: case removal
str = 'The man ran after it. The table down there? Yes, down there!'
str.lower()
###Output
_____no_output_____
###Markdown
** 2.6 Stemming ** Goal of Stemming: – stripping off endings and other pieces, called AFFIXES – for English, this is prefixes and suffixes. - convert word to its base word, called the LEMMA / STEM (e.g., foxes -> fox) Porter Stemmer - 100+ cascading “rewrite” rules ational -> ate (e.g., relational -> relate) ing -> (e.g., playing -> play) sess -> ss (e.g., grasses -> grass)
###Code
# NLTK example: stemming
def stem_with_porter(words):
porter = nltk.PorterStemmer()
new_words = [porter.stem(w) for w in words]
return new_words
def stem_with_lancaster(words):
porter = nltk.LancasterStemmer()
new_words = [porter.stem(w) for w in words]
return new_words
str = "Please don't unbuckle your seat-belt while I am driving, he said"
print("porter:", stem_with_porter(str.split()))
print()
print("lancaster:", stem_with_lancaster(str.split()))
###Output
porter: ['pleas', "don't", 'unbuckl', 'your', 'seat-belt', 'while', 'I', 'am', 'driving,', 'he', 'said']
lancaster: ['pleas', "don't", 'unbuckl', 'yo', 'seat-belt', 'whil', 'i', 'am', 'driving,', 'he', 'said']
###Markdown
3. Text Exploration ** 3.1 Frequency Analysis ** - Frequency Analysis - Letter - Word - Bigrams - Plots
###Code
# NLTK example: frequence analysis
import nltk
from nltk.corpus import gutenberg
from nltk.probability import FreqDist
# get raw text from "Sense and Sensibility" by Jane Austen
raw = gutenberg.raw("austen-sense.txt")
fd_letters = FreqDist(raw)
words = gutenberg.words("austen-sense.txt")
fd_words = FreqDist(words)
sas = nltk.Text(words)
# these 2 lines let us size the freq dist plot
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 5))
# frequency plot for letters from SAS
fd_letters.plot(100)
# these 2 lines let us size the freq dist plot
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 5))
# frequency plot for words from SAS
fd_words.plot(50)
###Output
_____no_output_____
###Markdown
** 3.2 Collocations **These are interesting word pairs, usually formed by the most common bigrams. Bigrams are collections of word pairs that occur together in the text.
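As a rough sketch of the idea (NLTK's `collocations()` additionally filters out stopwords and short words and ranks pairs with an association measure rather than raw counts), the most frequent bigrams can be computed directly:

```python
from nltk import FreqDist, bigrams
fd_bigrams = FreqDist(bigrams(words))  # `words` from "Sense and Sensibility", loaded above
fd_bigrams.most_common(5)
```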
###Code
# let's look at collocations for our "Sense and Sensibility" text
sas.collocations()
###Output
Colonel Brandon; Sir John; Lady Middleton; Miss Dashwood; every thing;
thousand pounds; dare say; Miss Steeles; said Elinor; Miss Steele;
every body; John Dashwood; great deal; Harley Street; Berkeley Street;
Miss Dashwoods; young man; Combe Magna; every day; next morning
###Markdown
Nice! Now we are getting a feel for the language and subjects of the text. ** 3.3 Long words **Sometimes looking at the long words in a text can be revealing. Let's try it on sas.
###Code
# let's look at long words in the text
longWords = [w for w in set(words) if len(w) > 13]
longWords[:15]
###Output
_____no_output_____
###Markdown
** 3.4 Concordance Views **Concordance views, also called Keywords in Context (KWIC), show the specified word with the words that surround it in the text. These views can be helpful in understanding how the words are being used in the text.
###Code
# Let's try looking at some of these recent words in a Concordance view
sas.concordance("affectionately")
print()
sas.concordance("correspondence")
print()
sas.concordance("dare")
print()
###Output
Displaying 2 of 2 matches:
before . She took them all most affectionately by the hand , and expressed gre
ed , took her hand , kissed her affectionately several times , and then gave w
Displaying 4 of 4 matches:
ould not be maintained if their correspondence were to pass through Sir John '
ve been Edward ' s gift ; but a correspondence between them by letter , could
she had no doubt , and of their correspondence she was not astonished to hear
e of Edward afforded her by the correspondence , for his name was not even men
Displaying 25 of 36 matches:
not know what he was talking of , I dare say ; ten to one but he was light -
l . The assistance he thought of , I dare say , was only such as might be reas
g , if I have plenty of money , as I dare say I shall , we may think about bui
, you will make conquests enough , I dare say , one way or other . Poor Brando
e . He is the curate of the parish I dare say ." " No , THAT he is not . He is
m . He was afraid of catching cold I dare say , and invented this trick for ge
ve it in my power to return , that I dare not engage for it at all ." " Oh ! h
and as like him as she can stare . I dare say the Colonel will leave her all h
t Miss Williams and , by the bye , I dare say it is , because he looked so con
" are of such a nature -- that -- I dare not flatter myself "-- He stopt . Mr
nd MY wealth are very much alike , I dare say ; and without them , as the worl
unites beauty with utility -- and I dare say it is a picturesque one too , be
, you know . Not above ten miles , I dare say ." " Much nearer thirty ," said
h my uncle at Weymouth . However , I dare say we should have seen a great deal
t if mama had not objected to it , I dare say he would have liked it of all th
ill think my question an odd one , I dare say ," said Lucy to her one day , as
an inquiry into her character ." " I dare say you are , and I am sure I do not
ave had no idea of it before ; for I dare say he never dropped the smallest hi
or she would never approve of it , I dare say . I shall have no fortune , and
to Elinor . " You know his hand , I dare say , a charming one it is ; but tha
o well as usual .-- He was tired , I dare say , for he had just filled the she
talking of their favourite beaux , I dare say ." " No sister ," cried Lucy , "
ng significantly round at them , " I dare say Lucy ' s beau is quite as modest
, for you are a party concerned . I dare say you have seen enough of Edward t
h pleasure to meet you there ! But I dare say you will go for all that . To be
###Markdown
** 3.5 Other Exploration Tasks/Views **
###Code
# look at words similiar to a word
sas.similar("affection")
# these 2 lines let us size the freq dist plot
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 4))
# look at words as they appear over time in the book/document
sas.dispersion_plot(["sense", "love", "heart", "listen", "man", "woman"])
###Output
_____no_output_____
###Markdown
4. Building Features ** 4.1 Bag-of-Words (BOW) **One of the simplest features when dealing with multiple texts (like multiple documents, or multiple sentences within a document) is called Bag-of-Words. It builds a vocabulary from each word in the set of texts, and then a feature for each word, indicating the presence/absence of that word within each text. Sometimes, the count of the word is used in place of a presence flag. A common way to represent a set of features like this is called a One-Hot vector. For example, let's say our vocabulary from our set of texts is: today, here, I, a, fine, sun, moon, bird, saw. The sentence we want to build a BOW for is: I saw a bird today. Using a 1/0 for each word in the vocabulary, our BOW encoded as a one-hot vector would be: 1 0 1 1 0 0 0 1 1
** 4.2 N-Grams **N-grams represent the sequences of N words that are found in a text. They are commonly used as a model of the text language since they represent the frequency of words/phrases appearing in the text. Common types of N-grams: unigrams - the set of single words appearing in the text; bigrams - the set of word pairs, like "good day" or "big deal", from the text; trigrams - the set of word triples, like "really good food", from the text. To build bigrams for a text, you need to extract all possible word pairs from the text and count how many times each pair occurs. Then, you can use the top N bigrams (e.g., 1000), or the top percent (e.g., 50%), as your language model.
** 4.3 Morphological Parsing ****Goal**: convert an input word into its morphological parts. For example: “geese” would return goose + N + PL. Morphological Parsing: geese -> goose + N + PL, caught -> catch + V + PastPart. Morphological parsing is related to stemming, but instead of mapping the word variants to a stem word, it labels the stem word and its affixes. Morphological parsing, even for English, is quite involved.
** 4.4 TD/IDF **TD/IDF (more commonly written TF-IDF) stands for Term Frequency / Inverse Document Frequency. "Term" here can be thought of as a word. This is a measure of the relative importance of a word within a document, in the context of multiple documents. We start with the TD (term frequency) part - this is simply a normalized frequency of the word in the document: - (word count in document) / (total words in document) The IDF is a weighting of the uniqueness of the word across all of the documents. Here is the complete formula of TD/IDF: - td_idf(t,d) = [wc(t,d)/wc(d)] / [dc(t)/dc()] where: - wc(t,d) = number of occurrences of term t in doc d - wc(d) = number of words in doc d - dc(t) = number of docs that contain at least 1 occurrence of term t - dc() = number of docs in the collection (A small illustrative sketch of BOW, bigram, and TD/IDF computations appears just before the HMM warm-up code below.)
** 4.5 Word Sense Disambiguation (WSD) **Related to POS tagging, WSD is used to distinguish between different senses of a word. Each sense of the word uses the same POS tag, but means something different. For example: - she served the King - she served the ball - he took his money to the bank - he took his canoe to the bank - I play bass guitar - I fish for bass
** 4.6 Anaphora Resolution **Examples: - Sam and Bill left with the toys. They were later found. Who does "they" refer to in the above sentence? Sam and Bill, or the toys?
** 4.7 Part-of-speech (POS) Tagging ** - Verb, noun, adjective, etc. - Simple tag set: 19 word classes. “They refuse to permit us to obtain a refuse permit” What tagset to use?
- Brown Corpus (87 tags) - C5 tagset (61 tags) - Penn Treebank (45 tags) Types of Taggers - Rule-based (e.g., regular expression) - Lookup (Unigram) - N-Gram - Hybrid and Backoff - Brill Tagger (learns rules) - HMM Tagger ** HMM Tagger **Previosly, we introduce Hidden Markov Models with a weather example. Here, we will show an example of how we can use a HMM to create a POS tagger.
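Before the HMM warm-up code below, here is a small illustrative sketch of the feature types described in section 4 — a Bag-of-Words one-hot vector, bigram counts, and the simple TD/IDF weighting. The three toy documents are made up for this illustration and are not drawn from the corpora used elsewhere in this notebook.

```python
docs = [
    "I saw a bird today",
    "a fine sun here today",
    "I saw the moon",
]

# Bag-of-Words: one-hot presence vector over the vocabulary
vocab = sorted(set(" ".join(docs).lower().split()))
bow = [1 if w in docs[0].lower().split() else 0 for w in vocab]
print(vocab)
print(bow)

# bigrams: adjacent word pairs and their counts
from collections import Counter
tokens = docs[0].lower().split()
print(Counter(zip(tokens, tokens[1:])))

# TD/IDF for one term/document pair, using the formula from section 4.4
def td_idf(term, doc, docs):
    words = doc.lower().split()
    tf = words.count(term) / len(words)                            # wc(t,d) / wc(d)
    df = sum(term in d.lower().split() for d in docs) / len(docs)  # dc(t) / dc()
    return tf / df

print(td_idf("saw", docs[0], docs))   # appears in 2 of 3 docs -> lower weight
print(td_idf("bird", docs[0], docs))  # appears in only 1 doc  -> higher weight
```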
###Code
# before building the HMM Tagger, let's warm up with implementing our HMM weather example here
# states
start = -1; cold = 0; normal = 1; hot = 2; stateCount = 3
stateNames = ["cold", "normal", "hot"]
# outputs
hotChoc = 0; soda=1; iceCream = 2
timeSteps = 7
# state transition probabilities
trans = {}
trans[(start, cold)] = .1
trans[(start, normal)] = .8
trans[(start, hot)] = .1
trans[(cold, cold)] = .7
trans[(cold, normal)] = .1
trans[(cold, hot)] = .2
trans[(normal, cold)] = .3
trans[(normal, normal)] = .4
trans[(normal, hot)] = .3
trans[(hot, cold)] = .2
trans[(hot, normal)] = .4
trans[(hot, hot)] = .4
# state outputs
output = {}
output[(cold, hotChoc)] = .7
output[(cold, soda)] = .3
output[(cold, iceCream)] = 0
output[(normal, hotChoc)] = .1
output[(normal, soda)] = .7
output[(normal, iceCream)] = .2
output[(hot, hotChoc)] = 0
output[(hot, soda)] = .6
output[(hot, iceCream)] = .4
diary = [soda, soda, hotChoc, iceCream, soda, soda, iceCream]
# manage cell values and back pointers
cells = {}
backStates = {}
def computeMaxPrev(t, sNext):
maxValue = 0
maxState = 0
for s in range(stateCount):
value = cells[t, s] * trans[(s, sNext)]
if (s == 0 or value > maxValue):
maxValue = value
maxState = s
return (maxValue, maxState)
def viterbi(trans, output, diary):
# special handling for t=0 which have no prior states)
for s in range(stateCount):
cells[(0, s)] = trans[(start, s)] * output[(s, diary[0])]
# handle rest of time steps
for t in range(1, timeSteps):
for s in range(stateCount):
maxValue, maxState = computeMaxPrev(t-1, s)
backStates[(t,s)] = maxState
cells[(t, s)] = maxValue * output[(s, diary[t])]
#print("t=", t, "s=", s, "maxValue=", maxValue, "maxState=", maxState, "output=", output[(s, diary[t])], "equals=", cells[(t, s)])
# walk thru cells backwards to get most probable path
path = []
for tt in range(timeSteps):
t = timeSteps - tt - 1 # step t backwards over timesteps
maxValue = 0
maxState = 0
for s in range(stateCount):
value = cells[t, s]
if (s == 0 or value > maxValue):
maxValue = value
maxState = s
path.insert(0, maxState)
return path
# test our algorithm on the weather problem
path = viterbi(trans, output, diary)
print("Weather by days:")
for i in range(timeSteps):
state = path[i]
print(" day=", i+1, stateNames[state])
###Output
Weather by days:
day= 1 normal
day= 2 normal
day= 3 cold
day= 4 hot
day= 5 normal
day= 6 normal
day= 7 hot
###Markdown
** HMM Tagger Overview **We are going to use a Hidden Markov Model to help us assign Part-of-Speech tags (like noun, verb, adjective, etc.) to words in a sentence. We treat the human author of the sentence as moving between different meaning states (POS tags) as they compose the sentence. Those states are hidden from us, but we observe the words of the sentence (the output of the meaning states). In our example here, we will use 4 POS tags from the 87-tag Brown corpus: - VB (verb, base form) - TO (infinitive marker) - NN (common singular noun) - PPSS (other nominative pronoun) We are given the state-to-state transition probabilities and the state-output probabilities (see next code cell). We are also given the sentence to decode: "I WANT TO RACE".
###Code
# OK, here is our HMM POS Tagger for this example
# states
start = -1; VB = 0; TO = 1; NN = 2; PPSS = 3; stateCount = 4
stateNames = ["VB", "TO", "NN", "PPSS"]
# outputs
I = 0; WANT = 1; To = 2; RACE=3
timeSteps = 4
# state transition probabilities
trans = {}
trans[(start, VB)] = .19
trans[(start, TO)] = .0043
trans[(start, NN)] = .041
trans[(start, PPSS)] = .067
trans[(VB, VB)] = .0038
trans[(VB, TO)] = .035
trans[(VB, NN)] = .047
trans[(VB, PPSS)] = .0070
trans[(TO, VB)] = .83
trans[(TO, TO)] = 0
trans[(TO, NN)] = .00047
trans[(TO, PPSS)] = 0
trans[(NN, VB)] = .0040
trans[(NN, TO)] = .016
trans[(NN, NN)] = .087
trans[(NN, PPSS)] = .0045
trans[(PPSS, VB)] = .23
trans[(PPSS, TO)] = .00079
trans[(PPSS, NN)] = .0012
trans[(PPSS, PPSS)] = .00014
# state outputs
output = {}
output[(VB, I)] = 0
output[(VB, WANT)] = .0093
output[(VB, To)] = 0
output[(VB, RACE)] = .00012
output[(TO, I)] = 0
output[(TO, WANT)] = 0
output[(TO, To)] = .99
output[(TO, RACE)] = 0
output[(NN, I)] = 0
output[(NN, WANT)] = .000054
output[(NN, To)] = 0
output[(NN, RACE)] = .00057
output[(PPSS, I)] = .37
output[(PPSS, WANT)] = 0
output[(PPSS, To)] = 0
output[(PPSS, RACE)] = 0
sentence = [I, WANT, To, RACE]
words = ["I", "WANT", "TO", "RACE"]
# manage cell values and back pointers
cells = {}
backStates = {}
def computeMaxPrev(t, sNext):
maxValue = 0
maxState = 0
for s in range(stateCount):
value = cells[t, s] * trans[(s, sNext)]
if (s == 0 or value > maxValue):
maxValue = value
maxState = s
return (maxValue, maxState)
def viterbi(trans, output, sentence):
# special handling for t=0 which have no prior states)
for s in range(stateCount):
cells[(0, s)] = trans[(start, s)] * output[(s, sentence[0])]
# handle rest of time steps
for t in range(1, timeSteps):
for s in range(stateCount):
maxValue, maxState = computeMaxPrev(t-1, s)
backStates[(t,s)] = maxState
cells[(t, s)] = maxValue * output[(s, sentence[t])]
#print("t=", t, "s=", s, "maxValue=", maxValue, "maxState=", maxState, "output=", output[(s, sentence[t])], "equals=", cells[(t, s)])
# walk thru cells backwards to get most probable path
path = []
for tt in range(timeSteps):
t = timeSteps - tt - 1 # step t backwards over timesteps
maxValue = 0
maxState = 0
for s in range(stateCount):
value = cells[t, s]
if (s == 0 or value > maxValue):
maxValue = value
maxState = s
path.insert(0, maxState)
return path
# test our algorithm on the POS TAG data
path = viterbi(trans, output, sentence)
print("Tagged Sentence:")
for i in range(timeSteps):
state = path[i]
print(" word=", words[i], "\ttag=", stateNames[state])
# Here is an example of using the NLTK POS tagger
import nltk
nltk.pos_tag("they refuse to permit us to obtain the refuse permit".split())
# POS tagging with supervised learning, using word suffix parts as features
import nltk
# start by finding the most common 1, 2, and 3 character suffixes of words (using Brown corpus of 1.1 million words)
from nltk.corpus import brown
fd = nltk.FreqDist() # create an empty one that we will count with
for word in brown.words():
wl = word.lower()
fd[wl[-1:]] += 1
fd[wl[-2:]] += 1
fd[wl[-3:]] += 1
topSuffixes = [ key for (key,value) in fd.most_common(30)]
print(topSuffixes[:40])
def pos_features(word):
features = {}
for suffix in topSuffixes:
features[suffix] = word.lower().endswith(suffix)
return features
#pos_features("table")
tagWords = brown.tagged_words(categories="news")
data = [(pos_features(word), tag) for (word,tag) in tagWords]
print("len(data)=", len(data))
dataCount = len(data)
trainCount = int(.8*dataCount)
trainData = data[:trainCount]
testData = data[trainCount:]
dtree = nltk.DecisionTreeClassifier.train(trainData)
#dtree = nltk.NaiveBayesClassifier.train(trainData)
print("train accuracy=", nltk.classify.accuracy(dtree, trainData))
print("test accuracy=", nltk.classify.accuracy(dtree, testData))
print(dtree.classify(pos_features("cats")))
print(dtree.classify(pos_features("house")))
print(dtree.pseudocode(depth=4))
###Output
['e', ',', '.', 's', 'd', 't', 'he', 'n', 'a', 'of', 'the', 'y', 'r', 'to', 'in', 'f', 'o', 'ed', 'nd', 'is', 'on', 'l', 'g', 'and', 'ng', 'er', 'as', 'ing', 'h', 'at']
len(data)= 100554
train accuracy= 0.48975050656986935
test accuracy= 0.48466013624384663
NNS
NN
if the == False:
if , == False:
if s == False:
if . == False: return '``'
if . == True: return '.'
if s == True:
if is == False: return 'NNS'
if is == True: return 'BEZ'
if , == True: return ','
if the == True: return 'AT'
###Markdown
5. Classical NLP Applications ** 5.1 Name Gender Classifier **
###Code
# code to build a classifier to classify names as male or female
# demonstrates the basics of feature extraction and model building
names = [(name, 'male') for name in nltk.corpus.names.words("male.txt")]
names += [(name, 'female') for name in nltk.corpus.names.words("female.txt")]
def extract_gender_features(name):
name = name.lower()
features = {}
features["suffix"] = name[-1:]
features["suffix2"] = name[-2:] if len(name) > 1 else name[0]
features["suffix3"] = name[-3:] if len(name) > 2 else name[0]
#features["suffix4"] = name[-4:] if len(name) > 3 else name[0]
#features["suffix5"] = name[-5:] if len(name) > 4 else name[0]
#features["suffix6"] = name[-6:] if len(name) > 5 else name[0]
features["prefix"] = name[:1]
features["prefix2"] = name[:2] if len(name) > 1 else name[0]
features["prefix3"] = name[:3] if len(name) > 2 else name[0]
features["prefix4"] = name[:4] if len(name) > 3 else name[0]
features["prefix5"] = name[:5] if len(name) > 4 else name[0]
#features["wordLen"] = len(name)
#for letter in "abcdefghijklmnopqrstuvwyxz":
# features[letter + "-count"] = name.count(letter)
return features
data = [(extract_gender_features(name), gender) for (name,gender) in names]
import random
random.shuffle(data)
#print(data[:10])
#print()
#print(data[-10:])
dataCount = len(data)
trainCount = int(.8*dataCount)
trainData = data[:trainCount]
testData = data[trainCount:]
bayes = nltk.NaiveBayesClassifier.train(trainData)
def classify(name):
label = bayes.classify(extract_gender_features(name))
print("name=", name, "classifed as=", label)
print("trainData accuracy=", nltk.classify.accuracy(bayes, trainData))
print("testData accuracy=", nltk.classify.accuracy(bayes, testData))
bayes.show_most_informative_features(25)
# print gender classifier errors so we can design new features to identify the cases
errors = []
for (name,label) in names:
if bayes.classify(extract_gender_features(name)) != label:
errors.append({"name": name, "label": label})
#errors
###Output
_____no_output_____
###Markdown
** 5.2 Sentiment Analysis **
###Code
# movie reviews / sentiment analysis - part #1
from nltk.corpus import movie_reviews as reviews
import random
docs = [(list(reviews.words(id)), cat) for cat in reviews.categories() for id in reviews.fileids(cat)]
random.shuffle(docs)
#print([ (len(d[0]), d[0][:2], d[1]) for d in docs[:10]])
fd = nltk.FreqDist(word.lower() for word in reviews.words())
topKeys = [ key for (key,value) in fd.most_common(2000)]
# movie reviews sentiment analysis - part #2
import nltk
def review_features(doc):
docSet = set(doc)
features = {}
for word in topKeys:
features[word] = (word in docSet)
return features
#review_features(reviews.words("pos/cv957_8737.txt"))
data = [(review_features(doc), label) for (doc,label) in docs]
dataCount = len(data)
trainCount = int(.8*dataCount)
trainData = data[:trainCount]
testData = data[trainCount:]
bayes2 = nltk.NaiveBayesClassifier.train(trainData)
print("train accuracy=", nltk.classify.accuracy(bayes2, trainData))
print("test accuracy=", nltk.classify.accuracy(bayes2, testData))
bayes2.show_most_informative_features(20)
###Output
train accuracy= 0.85875
test accuracy= 0.755
Most Informative Features
outstanding = True pos : neg = 13.6 : 1.0
seagal = True neg : pos = 11.0 : 1.0
mulan = True pos : neg = 7.7 : 1.0
wonderfully = True pos : neg = 7.2 : 1.0
poorly = True neg : pos = 6.5 : 1.0
awful = True neg : pos = 5.6 : 1.0
ridiculous = True neg : pos = 5.5 : 1.0
waste = True neg : pos = 5.4 : 1.0
wasted = True neg : pos = 5.2 : 1.0
dull = True neg : pos = 5.1 : 1.0
worst = True neg : pos = 5.0 : 1.0
lame = True neg : pos = 5.0 : 1.0
unfunny = True neg : pos = 4.8 : 1.0
fantastic = True pos : neg = 4.8 : 1.0
damon = True pos : neg = 4.8 : 1.0
jedi = True pos : neg = 4.7 : 1.0
era = True pos : neg = 4.5 : 1.0
allows = True pos : neg = 4.1 : 1.0
boring = True neg : pos = 4.1 : 1.0
mess = True neg : pos = 3.9 : 1.0
###Markdown
** 5.3 Named Entity Recognition (NER) **Popular Named Entity Types: - ORGANIZATION - PERSON - LOCATION - GPE - DATE Named Entity Extraction Techniques: - Chunking (tag pattern to group) - Chinking (tag pattern to omit) - Nested chunks (recursion) - Hand-crafted rules - Rules learned from data
###Code
# Named Entity Regcognition (NER)
# processes sentences and produces (entity, relation, entity) triples!
import nltk
# first, process the document by separating the text into sentences, then words within sentences, then tag words by sentence
def preprocess(doc):
sents = nltk.sent_tokenize(doc)
sents2 = [nltk.word_tokenize(sent) for sent in sents]
sents3 = [nltk.pos_tag(sent) for sent in sents2]
# we are going to use a technique called CHUNKING where we label sequences of POS tags as a high level tag, like a
# noun phrase (NP).
# here we test our idea with a simple sentence, and a grammar for detectecting NP. The grammar:
# <NP> ::= [ <DT> ] [ <JJ list> ] <NN>
tagged_sent = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"),
("the", "DT"), ("cat", "NN")]
#np_grammar = "NP: {<DT>?<JJ>*<NN>}"
#np_grammar = "NP: {<DT>?<JJ.*>*<NN.*>*}"
np_grammar = r"""
NP:
{<DT|PP\$>?<JJ>*<NN>}
{<NNP>+}
"""
parser = nltk.RegexpParser(np_grammar)
result = parser.parse(tagged_sent)
print(result.__repr__())
###Output
Tree('S', [Tree('NP', [('the', 'DT'), ('little', 'JJ'), ('yellow', 'JJ'), ('dog', 'NN')]), ('barked', 'VBD'), ('at', 'IN'), Tree('NP', [('the', 'DT'), ('cat', 'NN')])])
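###Markdown
The chunker above only labels noun phrases; to tag actual named-entity types (PERSON, ORGANIZATION, GPE, ...), NLTK also ships a pre-trained classifier-based chunker. A minimal sketch (it requires the `maxent_ne_chunker` and `words` resources from `nltk.download()`; the example sentence is made up):

```python
sent = "Bill Gates founded Microsoft in Albuquerque."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
print(tree)  # subtrees are labeled with entity types such as PERSON, ORGANIZATION, GPE
```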
|
Kinetics/estimate_kinetics.ipynb | ###Markdown
Kinetics Estimator
###Code
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
import rmgpy
from rmgpy.data.kinetics.family import TemplateReaction
from rmgpy.data.kinetics.depository import DepositoryReaction
from rmgpy.data.rmg import RMGDatabase
from rmgpy.kinetics.kineticsdata import KineticsData
from rmgpy.molecule.molecule import Molecule
from rmgpy.reaction import Reaction
from rmgpy.rmg.model import CoreEdgeReactionModel
from rmgpy.species import Species
from rmgpy.thermo.thermoengine import submit
###Output
_____no_output_____
###Markdown
Set reaction families of interest
###Code
families = ['R_Addition_MultipleBond']
###Output
_____no_output_____
###Markdown
Load database
###Code
database_path = rmgpy.settings['database.directory']
database = RMGDatabase()
database.load(
path = database_path,
thermo_libraries = ['primaryThermoLibrary'],
reaction_libraries = [],
seed_mechanisms = [],
kinetics_families = families,
)
# Load training data
for family in database.kinetics.families.values():
family.add_rules_from_training(thermo_database=database.thermo)
family.fill_rules_by_averaging_up(verbose=True)
###Output
_____no_output_____
###Markdown
Set reactants and products
###Code
# If you only want to specify reactants, just run this block
reactants = [
Species(smiles='c1ccccc1C=C'),
Species(smiles='[CH3]')
]
products = None
for r in reactants:
submit(r)
display(r)
# If you also want to specify products, run this block as well
products = [
Species(smiles="c1ccccc1[CH]CC"),
]
for p in products:
submit(p)
display(p)
###Output
_____no_output_____
###Markdown
Generate reactions
###Code
reaction_list = database.kinetics.generate_reactions_from_families(
reactants,
products=products,
only_families=None,
resonance=True,
)
###Output
_____no_output_____
###Markdown
Process reactions and apply kinetics
###Code
cerm = CoreEdgeReactionModel()
for rxn0 in reaction_list:
rxn1 = cerm.make_new_reaction(rxn0)
for rxn0 in cerm.new_reaction_list:
cerm.apply_kinetics_to_reaction(rxn0)
if isinstance(rxn0.kinetics, KineticsData):
rxn0.kinetics = rxn0.kinetics.to_arrhenius()
if isinstance(rxn0, (TemplateReaction, DepositoryReaction)):
rxn0.fix_barrier_height()
###Output
_____no_output_____
###Markdown
Display results
###Code
pressure = 1e5 # Pa
temperature = np.linspace(298, 2000, 50)
def plot_kinetics(reaction):
fig = plt.figure()
kinetics = reaction.kinetics
conversion_factor = kinetics.A.get_conversion_factor_from_si_to_cm_mol_s()
if len(reaction.reactants) == 1:
kunits = 's^-1'
elif len(reaction.reactants) == 2:
kunits = 'cm^3/(mol*s)'
else:
kunits = '???'
k = []
for t in temperature:
# Rates are returned in SI units by default
k.append(conversion_factor * kinetics.get_rate_coefficient(t, pressure))
x = 1000 / temperature
plt.semilogy(x, k)
plt.xlabel('1000/T (K)')
plt.ylabel('k [{0}]'.format(kunits))
plt.legend([str(reaction)], loc=8, bbox_to_anchor=(0.5, 1.02))
plt.show()
for rxn0 in cerm.new_reaction_list:
display(rxn0)
print(rxn0.kinetics)
plot_kinetics(rxn0)
###Output
_____no_output_____
###Markdown
Calculate equilibrium constants and reverse rate coefficients
###Code
# Pick a single reaction
selected_rxn = cerm.new_reaction_list[0]
display(selected_rxn)
keq = selected_rxn.get_equilibrium_constants(temperature)
fig = plt.figure()
plt.semilogy(temperature, keq)
plt.semilogy([298, 2000], [1, 1], 'r--')
plt.xlabel('Temperature (K)')
plt.ylabel('Keq')
plt.legend([str(selected_rxn)], loc=8, bbox_to_anchor=(0.5, 1.02))
plt.show()
reverse_rxn = Reaction(
reactants=selected_rxn.products,
products=selected_rxn.reactants,
kinetics=selected_rxn.generate_reverse_rate_coefficient()
)
display(reverse_rxn)
print(reverse_rxn.kinetics)
plot_kinetics(reverse_rxn)
###Output
_____no_output_____ |
tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb) **Detect causality between symptoms and treatment** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
###Code
import os
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
# Install Spark NLP Display for visualization
!pip install --ignore-installed spark-nlp-display
###Output
_____no_output_____
###Markdown
Import dependencies into Python
###Code
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from tabulate import tabulate
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(license_keys['SECRET'])
# manually start session
# params = {"spark.driver.memory" : "16G",
# "spark.kryoserializer.buffer.max" : "2000M",
# "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
###Output
_____no_output_____
###Markdown
2. Select the Relation Extraction model and construct the pipeline Select the models:* Clinical Relation Extraction models: **re_clinical**For more details: https://github.com/JohnSnowLabs/spark-nlp-modelspretrained-models---spark-nlp-for-healthcare
###Code
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "re_clinical"
NER_MODEL_NAME = "ner_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = MedicalNerModel.pretrained(NER_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentences", "tokens", "embeddings"])\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
pos_clinical download started this may take some time.
Approximate size to download 1.5 MB
[OK!]
dependency_conllu download started this may take some time.
Approximate size to download 16.7 MB
[OK!]
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
re_clinical download started this may take some time.
Approximate size to download 6 MB
[OK!]
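###Markdown
The commented-out `.setRelationPairs()` line in the cell above shows that the extractor can optionally be limited to specific entity-type pairs. Below is a minimal, hypothetical sketch of such a restricted extractor (the variable name `clinical_re_model_filtered` is only illustrative, and this cell is not part of the original workflow):
###Code
# Hypothetical sketch: restrict relation extraction to selected entity-type pairs.
# Pair strings follow the "entity1-entity2" convention noted in the comment above.
clinical_re_model_filtered = RelationExtractionModel()\
    .pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
    .setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
    .setOutputCol("clinical_relations")\
    .setMaxSyntacticDistance(4)\
    .setRelationPairs(["problem-test", "problem-treatment"])  # only these pairs are scored
###Output
_____no_output_____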
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
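###Markdown
The Spark DataFrame `result` produced by `transform` above is not inspected further in this notebook. As a minimal sketch (assuming the standard Spark NLP annotation schema, where `clinical_relations` is an array of annotations whose `metadata` map holds keys such as `chunk1`, `chunk2` and `confidence`), the already-imported `pyspark.sql.functions` alias `F` could be used to flatten it for a quick look:
###Code
# Sketch: explode the relation annotations and pull out a few metadata fields.
relations_df = result.select(F.explode("clinical_relations").alias("rel")).select(
    F.col("rel.result").alias("relation"),
    F.col("rel.metadata").getItem("chunk1").alias("chunk1"),
    F.col("rel.metadata").getItem("chunk2").alias("chunk2"),
    F.col("rel.metadata").getItem("confidence").alias("confidence"),
)
relations_df.show(truncate=False)
###Output
_____no_output_____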
###Markdown
5. Visualize
###Code
from sparknlp_display import RelationExtractionVisualizer
vis = RelationExtractionVisualizer()
vis.display(light_result[0], 'clinical_relations', show_relations=True) # default show_relations: True
###Output
/usr/local/lib/python3.7/dist-packages/sparknlp_display/relation_extraction.py:354: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
relation_coordinates = np.array(relation_coordinates)
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb) **Detect causality between symptoms and treatment** To run this yourself, you will need to upload your license keys to the notebook. Just run the cell below in order to do that. You can also open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens. Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
###Code
import json
import os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
###Output
_____no_output_____
###Markdown
Import dependencies into Python
###Code
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from tabulate import tabulate
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(license_keys['SECRET'])
# manually start session
# params = {"spark.driver.memory" : "16G",
# "spark.kryoserializer.buffer.max" : "2000M",
# "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
###Output
_____no_output_____
###Markdown
2. Select the Relation Extraction model and construct the pipeline Select the models:* Clinical Relation Extraction models: **re_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "re_clinical"
NER_MODEL_NAME = "ner_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = MedicalNerModel.pretrained(NER_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentences", "tokens", "embeddings"])\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
pos_clinical download started this may take some time.
Approximate size to download 1.5 MB
[OK!]
dependency_conllu download started this may take some time.
Approximate size to download 16.7 MB
[OK!]
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.7 MB
[OK!]
re_clinical download started this may take some time.
Approximate size to download 6 MB
[OK!]
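###Markdown
As a side note (not part of the original notebook): the fitted `pipeline_model` is a regular Spark ML `PipelineModel`, so it could in principle be saved and reloaded to avoid refitting and re-downloading the pretrained stages. The path below is only a placeholder:
###Code
# Sketch: persist the fitted pipeline and load it back later (the path is a placeholder).
from pyspark.ml import PipelineModel

pipeline_model.write().overwrite().save("exported_re_clinical_pipeline")
reloaded_pipeline = PipelineModel.load("exported_re_clinical_pipeline")
reloaded_light_pipeline = LightPipeline(reloaded_pipeline)
###Output
_____no_output_____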
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize
###Code
from sparknlp_display import RelationExtractionVisualizer
vis = RelationExtractionVisualizer()
vis.display(light_result[0], 'clinical_relations', show_relations=True) # default show_relations: True
###Output
/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sparknlp_display/relation_extraction.py:366: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
relation_coordinates = np.array(relation_coordinates)
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb) **Detect causality between symptoms and treatment** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
_____no_output_____
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Relation Extraction model and construct the pipeline Select the models:* Clinical Relation Extraction models: **re_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "re_clinical"
NER_MODEL_NAME = "ner_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
_____no_output_____
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize helper function for visualization
###Code
def get_relations_df (results, rel='clinical_relations'):
rel_pairs=[]
for rel in results[rel]:
rel_pairs.append((
rel.result,
rel.metadata['entity1'],
rel.metadata['entity1_begin'],
rel.metadata['entity1_end'],
rel.metadata['chunk1'],
rel.metadata['entity2'],
rel.metadata['entity2_begin'],
rel.metadata['entity2_end'],
rel.metadata['chunk2'],
rel.metadata['confidence']
))
rel_df = pd.DataFrame(rel_pairs, columns=['relation','entity1','entity1_begin','entity1_end','chunk1','entity2','entity2_begin','entity2_end','chunk2', 'confidence'])
return rel_df[rel_df.relation!='O']
get_relations_df(light_result[0])
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb) **Detect causality between symptoms and treatment** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
SparkNLP Version: 2.6.0
SparkNLP-JSL Version: 2.6.0
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 66kB/s
[K |████████████████████████████████| 204kB 19.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp==2.6.0
[?25l Downloading https://files.pythonhosted.org/packages/e4/30/1bd0abcc97caed518efe527b9146897255dffcf71c4708586a82ea9eb29a/spark_nlp-2.6.0-py2.py3-none-any.whl (125kB)
[K |████████████████████████████████| 133kB 3.2MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.6.0
Looking in indexes: https://pypi.org/simple, https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16
Collecting spark-nlp-jsl==2.6.0
Requirement already satisfied, skipping upgrade: spark-nlp==2.6.0 in /usr/local/lib/python3.6/dist-packages (from spark-nlp-jsl==2.6.0) (2.6.0)
Installing collected packages: spark-nlp-jsl
Successfully installed spark-nlp-jsl-2.6.0
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Relation Extraction model and construct the pipeline Select the models:* Clinical Relation Extraction models: **re_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "re_clinical"
NER_MODEL_NAME = "ner_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
pos_clinical download started this may take some time.
Approximate size to download 1.7 MB
[OK!]
dependency_conllu download started this may take some time.
Approximate size to download 16.6 MB
[OK!]
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.8 MB
[OK!]
re_clinical download started this may take some time.
Approximate size to download 6 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize helper function for visualization
###Code
def get_relations_df (results, rel='clinical_relations'):
rel_pairs=[]
for rel in results[rel]:
rel_pairs.append((
rel.result,
rel.metadata['entity1'],
rel.metadata['entity1_begin'],
rel.metadata['entity1_end'],
rel.metadata['chunk1'],
rel.metadata['entity2'],
rel.metadata['entity2_begin'],
rel.metadata['entity2_end'],
rel.metadata['chunk2'],
rel.metadata['confidence']
))
rel_df = pd.DataFrame(rel_pairs, columns=['relation','entity1','entity1_begin','entity1_end','chunk1','entity2','entity2_begin','entity2_end','chunk2', 'confidence'])
return rel_df[rel_df.relation!='O']
get_relations_df(light_result[0])
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_CLINICAL.ipynb) **Detect causality between symptoms and treatment** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
SparkNLP Version: 2.6.0
SparkNLP-JSL Version: 2.6.0
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 66kB/s
[K |████████████████████████████████| 204kB 19.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp==2.6.0
[?25l Downloading https://files.pythonhosted.org/packages/e4/30/1bd0abcc97caed518efe527b9146897255dffcf71c4708586a82ea9eb29a/spark_nlp-2.6.0-py2.py3-none-any.whl (125kB)
[K |████████████████████████████████| 133kB 3.2MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.6.0
Looking in indexes: https://pypi.org/simple, https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16
Collecting spark-nlp-jsl==2.6.0
Downloading https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16/spark-nlp-jsl/spark_nlp_jsl-2.6.0-py3-none-any.whl
Requirement already satisfied, skipping upgrade: spark-nlp==2.6.0 in /usr/local/lib/python3.6/dist-packages (from spark-nlp-jsl==2.6.0) (2.6.0)
Installing collected packages: spark-nlp-jsl
Successfully installed spark-nlp-jsl-2.6.0
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Relation Extraction model and construct the pipeline Select the models:* Clinical Relation Extraction models: **re_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "re_clinical"
NER_MODEL_NAME = "ner_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
pos_clinical download started this may take some time.
Approximate size to download 1.7 MB
[OK!]
dependency_conllu download started this may take some time.
Approximate size to download 16.6 MB
[OK!]
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.8 MB
[OK!]
re_clinical download started this may take some time.
Approximate size to download 6 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize helper function for visualization
###Code
def get_relations_df (results, rel='clinical_relations'):
rel_pairs=[]
for rel in results[rel]:
rel_pairs.append((
rel.result,
rel.metadata['entity1'],
rel.metadata['entity1_begin'],
rel.metadata['entity1_end'],
rel.metadata['chunk1'],
rel.metadata['entity2'],
rel.metadata['entity2_begin'],
rel.metadata['entity2_end'],
rel.metadata['chunk2'],
rel.metadata['confidence']
))
rel_df = pd.DataFrame(rel_pairs, columns=['relation','entity1','entity1_begin','entity1_end','chunk1','entity2','entity2_begin','entity2_end','chunk2', 'confidence'])
return rel_df[rel_df.relation!='O']
get_relations_df(light_result[0])
###Output
_____no_output_____ |
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial1.ipynb | ###Markdown
Tutorial 1: Framing the Question**Week 1, Day 2: Modeling Practice****By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the first 4 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)): **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypothesesThe remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling. **Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion-based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all within a section before moving on! DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do a roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question *Please discuss the following for about 25 min*You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:* What exact aspect of data needs modeling? * Answer this question clearly and precisely!Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet)* Define an evaluation method! * How will you know your modeling is good? * E.g. comparison to specific data (quantitative method of comparison?)* For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experimentYou can find interesting questions by looking for phenomena that differ from your expectations. In *what* way do they differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsQuestion is too general Remember: science advances one small step at a time. Get the small step right… Precise aspect of phenomenon you want to model is unclear You will fail to ask a meaningful question You have already chosen a toolkit This will prevent you from thinking deeply about the best way to answer your scientific question You don’t have a clear goal What do you want to get out of modeling? You don’t have a potential experiment in mind This will help concretize your objectives and think through the logic behind your goal **Note**The hardest part is Step 1. Once that is properly set up, all the others should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that has spike counts (positive integers) and a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field... The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!**Here is what you should get out of it**:* Survey the literature * What’s known? * What has already been done? * Previous models as a starting point? * What hypotheses have been proposed in the field? * Are there any alternative / complementary modeling approaches?* What skill sets are required? * Do I need to learn something before I can start? * Ensure that no important aspect is missed* Potentially provides specific data sets / alternative modeling approaches for comparison **Do this AFTER the tutorial** ---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
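###Markdown
As a purely illustrative sketch (not part of the official tutorial materials), the computational-model ingredients listed in the example above could be instantiated roughly as follows. The noise level, threshold and filter window are made-up numbers, not values derived from data:
###Code
# Illustrative sketch of the model ingredients: vestibular input v(t), a running-average
# filter f, an integration step, and a decision threshold theta. All numbers are made up.
dt = 1/100
t = np.arange(-1.5, 1.5 + dt, dt)
true_acceleration = np.zeros_like(t)   # stationary observer: no real self-motion
noise_sigma = 1.0                      # hypothetical vestibular noise level
theta = 0.5                            # hypothetical decision threshold on sensed velocity

def sense_self_motion(acceleration, noise_sigma, theta, window=10):
    v_t = acceleration + np.random.randn(len(acceleration)) * noise_sigma  # noisy input v(t)
    f_t = np.convolve(v_t, np.ones(window) / window, mode='same')          # running-average filter f
    velocity = np.cumsum(f_t) * dt                                         # integrate to sensed velocity
    return np.any(np.abs(velocity) > theta)                                # binary decision d

# "illusion strength": how often self-motion is reported when there is none
reports = [sense_self_motion(true_acceleration, noise_sigma, theta) for _ in range(200)]
print(f'illusory self-motion reported on {np.mean(reports):.1%} of trials')
###Output
_____no_output_____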
###Markdown
Think! 3: Determine your basic ingredients *Please discuss the following for about 25 min*This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:1. What parameters / variables are needed? * Constants? * Do they change over space, time, conditions…? * What details can be omitted? * Constraints, initial conditions? * Model inputs / outputs?2. Variables needed to describe the process to be modelled? * Brainstorming! * What can be observed / measured? Latent variables? * Where do these variables come from? * Do any abstract concepts need to be instantiated as variables? * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics * Instantiate them so that they relate to potential measurements!This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated. **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsI’m experienced, I don’t need to think about ingredients anymore Or so you think… I can’t think of any ingredients Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure? I have all inputs and outputs Good! But what will link them? Thinking about that will start shaping your model and hypotheses I can’t think of any links (= mechanisms) You will acquire a library of potential mechanisms as you keep modeling and learning But the literature will often give you hints through hypotheses If you still can't think of links, then maybe you're missing ingredients? ---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
>we could simply use the frequency of occurance across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = <b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 1: Framing the Question

**Week 1, Day 2: Modeling Practice**

**By Neuromatch Academy**

__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm

__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu

__Production editors:__ Ella Batty

**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**

---

Tutorial objectives

Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling by thinking through the logic of modeling based on your project ideas.

We assume that you have a general idea of a project in mind, i.e. a preliminary question and/or a phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the first 4 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)):

**Framing the question**

1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses

The remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.

**Importantly**, we will guide you through Steps 1-4 today. After you do more work on your projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling.

**Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects.

**Also**: "Models" here can be data analysis pipelines, not just computational models...

**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion-based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all of them within a section before moving on!

Demos

We will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do a roleplay to showcase some common pitfalls, based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistic regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
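
# --- Added check (an illustrative sketch, not part of the original tutorial) ---
# Quick sanity check of the N (neurons) and M (trials) quoted in the data
# analysis example above, using the `spikes` and `perception` arrays loaded
# in the Setup section:
print('spikes (conditions, trials M, neurons N, time bins):', np.shape(spikes))
print('perception (conditions, trials M):', np.shape(perception))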
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question

*Please discuss the following for about 25 min*

You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.**

As a reminder, here is what you should discuss and write down:

* What exact aspect of data needs modeling?
  * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed)
  * Write everything down!
  * Also identify aspects of data that you do not want to address (yet)
* Define an evaluation method!
  * How will you know your modeling is good?
  * E.g. comparison to specific data (quantitative method of comparison?)
* For computational models: think of an experiment that could test your model
  * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment

You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need?

**Make sure to avoid the pitfalls!**

Recap on pitfalls:

* Question is too general: Remember, science advances one small step at a time. Get the small step right…
* Precise aspect of phenomenon you want to model is unclear: You will fail to ask a meaningful question
* You have already chosen a toolkit: This will prevent you from thinking deeply about the best way to answer your scientific question
* You don’t have a clear goal: What do you want to get out of modeling?
* You don’t have a potential experiment in mind: This will help concretize your objectives and think through the logic behind your goal

**Note**

The hardest part is Step 1. Once that is properly set up, all other steps should be easier. **BUT**: often you think that Step 1 is done, only to figure out in later steps (anywhere, really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step.

----

Step 2: Understanding the state of the art & background

Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that has spike counts (positive integers), and a 2d array called `perception` with self-motion judgements (0 = no motion, 1 = motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field...

The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!

**Here is what you should get out of it**:

* Survey the literature
  * What’s known?
  * What has already been done?
  * Previous models as a starting point?
  * What hypotheses have been put forward in the field?
  * Are there any alternative / complementary modeling approaches?
* What skill sets are required?
  * Do I need to learn something before I can start?
  * Ensure that no important aspect is missed
* Potentially provides specific data sets / alternative modeling approaches for comparison

**Do this AFTER the tutorial**

----

Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
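
# --- Illustrative sketch (not part of the original tutorial) -------------
# A toy instantiation of the model ingredients listed above, with made-up
# numbers purely for illustration: a noisy vestibular acceleration v(t),
# an integration mechanism, a running-average filter f, and a threshold θ.
dt = 0.1
t = np.arange(0, 10, dt)
true_acc = np.ones(t.size) * 0.1                # hypothetical acceleration [m/s^2]
v_t = true_acc + 0.5 * np.random.randn(t.size)  # noisy vestibular input v(t)
sensed_vel = np.cumsum(v_t * dt)                # integration mechanism
window = 10
f_out = np.convolve(sensed_vel, np.ones(window) / window, mode='same')  # filter f
theta = 0.5                                     # decision threshold θ
d = f_out[-1] > theta                           # binary decision output d
print(f'toy model says self motion: {d}')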
###Output
_____no_output_____
###Markdown
Think! 3: Determine your basic ingredients

*Please discuss the following for about 25 min*

This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses, because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:

1. What parameters / variables are needed?
  * Constants?
  * Do they change over space, time, conditions…?
  * What details can be omitted?
  * Constraints, initial conditions?
  * Model inputs / outputs?
2. Variables needed to describe the process to be modelled?
  * Brainstorming!
  * What can be observed / measured? latent variables?
  * Where do these variables come from?
  * Do any abstract concepts need to be instantiated as variables?
    * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics
    * Instantiate them so that they relate to potential measurements!

This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated.

**Make sure to avoid the pitfalls!**

Recap on pitfalls:

* I’m experienced, I don’t need to think about ingredients anymore: Or so you think…
* I can’t think of any ingredients: Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure?
* I have all inputs and outputs: Good! But what will link them? Thinking about that will start shaping your model and hypotheses
* I can’t think of any links (= mechanisms): You will acquire a library of potential mechanisms as you keep modeling and learning. The literature will often give you hints through hypotheses. If you still can't think of links, then maybe you're missing ingredients?

----

Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
>we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = √<b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
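
# --- Illustrative sketch (not part of the original tutorial) -------------
# A minimal way Hyp 1 above could be checked, assuming the `spikes` and
# `perception` arrays from the data cell and the classifier imports from
# Setup. The window size and the number of cross-validation folds are
# arbitrary choices made only for this illustration.
n_move, n_trials, n_neurons, n_bins = spikes.shape
labels = perception.reshape(-1)  # perceived self motion per trial

# c_accum features: spike counts accumulated over the whole trial
X_accum = spikes.sum(axis=3).reshape(n_move * n_trials, n_neurons)

# c_win features: spike counts in a short window around the middle of the
# trial (where the firing-rate modulation peaks in this simulated data)
mid, half = n_bins // 2, n_bins // 10
X_win = spikes[:, :, :, mid - half:mid + half].sum(axis=3)
X_win = X_win.reshape(n_move * n_trials, n_neurons)

clf = LogisticRegression(solver='liblinear', random_state=0)
acc_accum = cross_val_score(clf, X_accum, labels, cv=8)
acc_win = cross_val_score(clf, X_win, labels, cv=8)
print(f'accumulated: {acc_accum.mean():.2%}  window: {acc_win.mean():.2%}')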
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 1

Modeling Practice: Framing the question

__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm

__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom

---

Tutorial objectives

Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling by building a simple model.

We will investigate a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks:

**Framing the question**

1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses

**Implementing the model**

5. selecting the toolkit
6. planning the model
7. implementing the model

**Model testing**

8. completing the model
9. testing and evaluating the model

**Publishing**

10. publishing models

Tutorial 1 (this notebook) will cover steps 1-5, while Tutorial 2 will cover steps 6-10.

**TD**: All activities you should perform are labeled with **TD.**, which stands for "To Do", followed by the micro-tutorial number and the activity number. They can be found in the Table of Contents on the left side of the notebook. Make sure you complete all of them within a section before moving on!

**Run**: Some code chunks' names start with "Run to ... (do something)". These chunks are purely to produce a graph or calculate a number. You do not need to look at or understand the code in those chunks.

Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window,
FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
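
# Added sketch (not part of the original helpers): the docstring above notes
# that convolution is a faster alternative. Assuming a signal without NaNs,
# a trailing moving average equivalent to my_moving_window with FUN=np.mean
# could be written as:
def my_moving_window_conv(x, window=3):
    kernel = np.ones(window)
    # trailing sums over (up to) the last `window` samples:
    sums = np.convolve(x, kernel, mode='full')[:x.size]
    # how many samples actually contributed to each sum (fewer at the start):
    counts = np.convolve(np.ones(x.size), kernel, mode='full')[:x.size]
    return sums / counts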
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition {condition:d} judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition {condition:d} prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis',\
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row',
figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# the average of correct rejections and correct detections
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
idx = np.argmax(prop_correct[::-1]) + 1
plt.plot([thresholds[-idx]]*2, [0, 1], '--', color='xkcd:purple',
label='best classification')
plt.text(0.7, 0.8,
f"threshold:{thresholds[-idx]:0.2f}\
\ncorrect: {prop_correct[-idx]:0.2f}")
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
###Markdown
--- Section 1: Investigating the phenomenon
###Code
# @title Video 1: Question
video = YouTubeVideo(id='x4b2-hZoyiY', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal**: formulate a good question!

**Background: The train illusion**

In the video you have learnt about the train illusion. In the same situation, we sometimes perceive our own train to be moving and sometimes the other train. How come our perception is ambiguous?

We will build a simple model with the goal _to learn about the process of model building_ (i.e. not to explain train illusions or get a correct model). To keep this manageable, we use a _simulated_ data set. For the same reason, this tutorial contains both coding and thinking activities. Doing both is essential for success.

Imagine we get data from an experimentalist who collected _judgments_ on self motion and world motion, in two conditions: one where there was only world motion, and one where there was only self motion. In either case, the velocity increased from 0 to 1 m/s across 10 seconds with the same (fairly low) acceleration. Each of these conditions was recorded 100 times:

![illustration of the conditions](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig01.png)

Participants sit very still during the trials, and at the end of each 10 s trial they are given two sliders, one to indicate the self-motion velocity (in m/s) and another to indicate the world-motion velocity (in m/s) _at the end of the interval_.

TD 1.1: Form expectations about the experiment, using the phenomena

In the experiment we get the participants' _judgments_ of the velocities they experienced. In the Python chunk below, you should retain the numbers that represent your expectations of the participants' judgments. Remember that in the train illusion people usually experience either self motion or world motion, but not both. From the lists, remove those pairs of responses you think are unlikely to be the participants' judgments. The first two pairs of coordinates (1 m/s, 0 m/s, and 0 m/s, 1 m/s) are the stimuli, so those reflect judgments without illusion. Those should stay, but how do you think participants judge velocities when they _do_ experience the illusion?

**Create Expectations**
###Code
# Create Expectations
###################################################################
# To complete the exercise, remove unlikely responses from the two
# lists. The lists act as X and Y coordinates for a scatter plot,
# so make sure the lists match in length.
###################################################################
world_vel_exp = [1, 0, 1, 0.5, 0.5, 0]
self_vel_exp = [0, 1, 1, 0.5, 0, 0.5]
# The code below creates a figure with your predictions:
my_plot_percepts(datasets={'expectations': {'world': world_vel_exp,
'self': self_vel_exp}})
###Output
_____no_output_____
###Markdown
**TD 1.2**: Compare Expectations to Data

The behavioral data from our experiment is in a 200 x 5 matrix called `judgments`, where each row indicates a trial.

The first three columns in the `judgments` matrix represent the conditions in the experiment, and the last two columns list the velocity judgments.

![illustration of the judgments matrix](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig02.png)

The condition number can be 0 (world-motion condition, first 100 rows) or 1 (self-motion condition, last 100 rows). Columns 1 and 2 respectively list the true self- and world-motion velocities in the experiment. You will not have to use the first three columns.

The motion judgments (columns 3 and 4) are the participants' judgments of the self-motion velocity and world-motion velocity respectively, and should show the illusion. Let's plot the judgment data, along with the true motion of the stimuli in the experiment:
###Code
#@title
#@markdown Run to plot perceptual judgments
my_plot_percepts(datasets={'judgments': judgments}, plotconditions=True)
###Output
_____no_output_____
###Markdown
TD 1.3: Think about what the data is saying, by answering these questions:

* How does it differ from your initial expectations?
* Where are the clusters of data, roughly?
* What does it mean that some of the judgments from the world-motion condition are close to the self-motion stimulus and vice versa?
* Why are there no data points in the middle?
* What aspects of the data require explanation?

---

Section 2: Understanding background
###Code
# @title Video 2: Background
video = YouTubeVideo(id='DcJ91h5Ekis', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Now that we have an interesting phenomenon, we gather background information which will refine our questions, and we lay the groundwork for developing scientific hypotheses.

**Background: Motion Sensing**: Our self-motion percepts are based on our visual (optic flow) and vestibular (inner ear) sensing. Optic flow is the moving image on the retina caused by either self- or world-motion. Vestibular signals are related to bodily self-movements only. The two signals can be understood as velocity in $m/s$ (optic flow) and acceleration in $m/s^2$ (vestibular signal).

We'll first look at the ground truth which is stimulating the senses in our experiment.
###Code
#@markdown **Run to plot motion stimuli**
my_plot_motion_signals()
###Output
_____no_output_____
###Markdown
TD 2.1: Examine the differences between the conditions:

* how are the visual inputs (optic flow) different between the conditions?
* how are the vestibular signals different between the conditions?
* how might the brain use these signals to determine there is self motion?
* how might the brain use these signals to determine there is world motion?

We can see that, in theory, we have enough information to disambiguate self-motion from world-motion using these signals. Let's go over the logic together. The visual signal is ambiguous: it will be non-zero when there is either self-motion or world-motion. The vestibular signal is specific: it’s only non-zero when there is self-motion. Combining these two signals should allow us to disambiguate the self-motion condition from the world-motion condition!

* In the world-motion condition: The brain can simply compare the visual and vestibular signals. If there is visual motion AND NO vestibular motion, it must be that the world is moving but not the body/self = world-motion judgement.
* In the self-motion condition: We can make a similar comparison. If there are both visual AND vestibular signals, it must be that the body/self is moving = self-motion judgement.

**Background: Integrating signals**: To understand how the vestibular _acceleration_ signal could underlie the perception of self-motion _velocity_, we assume the brain integrates the signal. This also allows comparing the vestibular signal to the visual signal, by getting them in the same units. Read more about integration on [Wikipedia](https://en.wikipedia.org/wiki/Integral).

Below we will approximate the integral using `np.cumsum()`. The discrete integral would be:

$$v_t = \sum_{k=0}^t a_k\cdot\Delta t + v_0$$

* $a(t)$ is acceleration as a function of time
* $v(t)$ is velocity as a function of time
* $\Delta t$ is equal to the sample interval of our recorded visual and vestibular signals (0.1 s)
* $v_0$ is the _constant of integration_, which corresponds to the initial velocity at time $0$ (it would have to be known or remembered). Since that is always 0 in our experiment, we will leave it out from here on.

Numerically Integrating a signal

Below is a chunk of code which uses the `np.cumsum()` function to integrate the acceleration that was used in our (simulated) experiment, `a`, over `dt` in order to get a velocity signal `v`.
###Code
# Check out the code:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
# This does the integration of acceleration into velocity:
v = np.cumsum(a * dt)
my_plot_stimuli(t, a, v)
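
# Added check (not part of the original tutorial): the cumulative sum above
# is exactly the discrete integral written in the text,
# v_t = sum_{k=0}^{t} a_k * dt (with v_0 = 0). Spelling the sum out
# explicitly gives the same velocity trace:
v_explicit = np.array([np.sum(a[:k + 1]) * dt for k in range(a.size)])
print('cumsum matches the explicit sum:', np.allclose(v, v_explicit))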
###Output
_____no_output_____
###Markdown
**Background: Sensory signals are noisy**

In our experiment, we also recorded sensory signals in the participant. The data come in two 200 x 100 matrices:

* `opticflow` (with the visual signals)
* `vestibular` (with the vestibular signals)

In each of the signal matrices, _rows_ (200) represent **trials**, in the same order as in the `judgments` matrix. _Columns_ (100) are **time samples**, representing 10 s collected with a 100 ms time bin.

![illustration of the signal matrices](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig03.png)

Here we plot the data representing our 'sensory signals':

* plot optic flow signals for self-motion vs world-motion conditions (should be the same)
* plot vestibular signals for self-motion vs world-motion conditions (should be different)

The x-axis is time in seconds, but the y-axis can be one of several, depending on what you do with the signals: $m/s^2$ (acceleration) or $m/s$ (velocity).
###Code
#@markdown **Run to plot raw noisy sensory signals**
# signals as they are:
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=False)
###Output
_____no_output_____
###Markdown
TD 2.2: Understanding the problem of noisy sensory information

**Answer the following questions:**

* Is this what you expected?
* In which of the two signals should we be able to see a difference between the conditions?
* Can we use the data as it is to differentiate between the conditions?
* Can we compare the visual and vestibular motion signals when they're in different units?
* What would the brain do to differentiate the two conditions?

Now that we know how to integrate the vestibular signal to get it into the same unit as the optic flow, we can see if it shows the pattern it should: a flat line in the world-motion condition and the correct velocity profile in the self-motion condition. Run the chunk of Python below to plot the sensory data again, but now with the vestibular signal integrated.
###Code
#@markdown **Run to compare true signals to sensory data**
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=True, returnaxes=False,
addaverages=False, addGroundTruth=True)
###Output
_____no_output_____
###Markdown
The thick lines are the ground truth: the actual velocities in each of the conditions. With some effort, we can make out that _on average_ the vestibular signal does show the expected pattern after all. But there is also a lot of noise in the data.

**Background Summary**: Now that we have examined the sensory signals and understand how they relate to the ground truth, we see that there is enough information to _in principle_ disambiguate true self-motion from true world motion (there should be no illusion!). However, because the sensory information contains a lot of noise, i.e. it is unreliable, it could result in ambiguity.

**_It is time to refine our research question:_**

* Does the self-motion illusion occur due to unreliable sensory information?

---

Section 3: Identifying ingredients
###Code
# @title Video 3: Ingredients
video = YouTubeVideo(id='ZQRtysK4OCo', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
TD 3.1: Understand the moving average function

**Goal**: think about what ingredients we will need for our model

We have access to sensory signals from the visual and vestibular systems that are used to estimate world motion and self motion. However, there are still two issues:

1. _While sensory input can be noisy or unstable, perception is much more stable._
2. _In the judgments there is either self motion or not._

We will solve this by using:

1. _a moving average filter_ to stabilize our sensory signals
2. _a threshold function_ to distinguish moving from non-moving

One of the simplest models of noise reduction is a moving average (sometimes: moving mean or rolling mean) over the recent past. In a discrete signal we specify the number of samples to use for the average (including the current one), and this is often called the _window size_. For more information on the moving average, check [this Wikipedia page](https://en.wikipedia.org/wiki/Moving_average).

In this tutorial there is a simple running average function available:

`my_moving_window(s, w)`: takes a signal time series $s$ and a window size $w$ as input and returns the moving average for all samples in the signal.

Interactive Demo: Averaging window

The code below picks one vestibular signal, integrates it to get a velocity estimate for self motion, and then filters it. You can set the window size.

Try different window sizes, then answer the following:

* What is the maximum window size? The minimum?
* Why does increasing the window size shift the curves?
* How do the filtered estimates differ from the true signal?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
t = np.arange(0, 10, .1)
def refresh(trial_number=101, window=15):
# get the trial signal:
signal = vestibular[trial_number - 1, :]
# integrate it:
signal = np.cumsum(signal * .1)
# plot this signal
plt.plot(t, signal, label='integrated vestibular signal')
# filter:
signal = my_moving_window(signal, window=window, FUN=np.mean)
# plot filtered signal
plt.plot(t, signal, label=f'filtered with window: {window}')
plt.legend()
plt.show()
_ = widgets.interact(refresh, trial_number=(1, 200, 1), window=(5, 100, 1))
###Output
_____no_output_____
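###Markdown
For intuition, a causal moving average can be written in a few lines of NumPy. The sketch below is only an illustration of the idea, not the tutorial's helper: the actual `my_moving_window()` defined in the "Helper functions" block may treat window edges (and NaNs) differently.
###Code
# A minimal sketch of a causal moving-average filter (illustration only,
# not the my_moving_window() helper used elsewhere in this notebook)
import numpy as np
def simple_moving_average(s, w):
    """Average each sample with the w-1 samples before it (shorter window at the start)."""
    sums = np.convolve(s, np.ones(w), mode='full')[:len(s)]   # causal running sums
    counts = np.minimum(np.arange(1, len(s) + 1), w)          # actual number of samples averaged
    return sums / counts
# quick check on a noisy toy signal
toy = np.sin(np.arange(0, 10, .1)) + np.random.randn(100) * .3
print(simple_moving_average(toy, w=15)[:5].round(2))
###Output
_____no_output_____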
###Markdown
_Note: the function `my_moving_window()` is defined in this notebook in the code block at the top called "Helper functions". It should be the first function there, so feel free to check how it works._ TD 3.2: Thresholding the self-motion vestibular signalComparing the integrated, filtered (accumulated) vestibular signals with a threshold should allow determining if there is self motion or not.To try this, we:1. Integrate the vestibular signal, apply a moving average filter, and take the last value of each trial's vestibular signal as an estimate of self-motion velocity. 2. Transfer the estimates of self-motion velocity into binary (0,1) decisions by comparing them to a threshold. Remember that the output of logical comparators (>, =, <) is boolean (True/1, False/0). 1 indicates we think there was self-motion and 0 indicates otherwise. This is the step you will implement below.3. We sort these decisions separately for conditions of real world-motion vs. real self-motion to determine 'classification' accuracy.4. To understand how the threshold impacts classification accuracy, we do 1-3 for a range of thresholds.There is one line of code to complete, which will implement step 2. Exercise 1: Threshold self-motion velocity into binary classification of self-motion
###Code
def my_threshold(selfmotion_vel_est, threshold):
"""
This function compares the estimated self-motion velocities
to a threshold and returns a binary self-motion decision
for every trial.
Args:
selfmotion_vel_est (numpy.ndarray): A sequence of floats
indicating the estimated self motion for all trials.
threshold (float): A threshold for the estimate of self motion when
the brain decides there really is self motion.
Returns:
(numpy.ndarray): self-motion: yes or no.
"""
##############################################################
# Compare the self motion estimates to the threshold:
# Replace '...' with the proper code:
# Remove the next line to test your function
raise NotImplementedError("Modify my_threshold function")
##############################################################
# Compare the self motion estimates to the threshold
is_move = ...
return is_move
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_d278e3f8.py) Interactive Demo: Threshold vs. averaging windowNow we combine the classification steps 1-3 above, for a variable threshold. This will allow us to find the threshold that produces the most accurate classification of self-motion.We also add a 'widget' that controls the size of the moving average window. How does the optimal threshold vary with window size?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
thresholds = np.round_(np.arange(0, 1.01, .01), 2)
v_ves = np.cumsum(vestibular * .1, axis=1)
def refresh(window=50):
selfmotion_vel_est = my_moving_window(v_ves, window=window,
FUN=np.mean)[:, 99]
[pselfmove_nomove,
pselfmove_move,
pcorrect] = my_moving_threshold(selfmotion_vel_est, thresholds)
my_plot_thresholds(thresholds, pselfmove_nomove, pselfmove_move, pcorrect)
_ = widgets.interact(refresh, window=(1, 100, 1))
###Output
_____no_output_____
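###Markdown
If you prefer a non-interactive view of the same idea, the short sketch below reuses `v_ves`, `thresholds` and `my_moving_window()` from the cell above (with a window of 100 samples) and simply counts which fraction of all trials would be classified as self motion at each threshold. The widget's helper functions additionally split this by condition to compute classification accuracy.
###Code
# Sketch: fraction of trials classified as "self motion" as a function of the threshold
# (uses v_ves, thresholds and my_moving_window() defined above)
vel_est = my_moving_window(v_ves, window=100, FUN=np.mean)[:, 99]  # last sample of each trial
p_move = np.array([np.mean(vel_est > th) for th in thresholds])
plt.figure(figsize=(6, 4))
plt.plot(thresholds, p_move)
plt.xlabel('threshold')
plt.ylabel('proportion of trials classified as self motion')
plt.show()
###Output
_____no_output_____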
###Markdown
Let's unpack this: Ideally, in the self-motion condition (blue line) we should always detect self motion, and never in the world-motion condition (red line). This doesn't happen, regardless of the settings we pick. However, we can pick a threshold that gives the highest proportion of correctly classified trials, which depends on the window size, but is between 0.2 and 0.4. We'll pick the optimal threshold for a window size of 100 (the full signal), which is 0.33. The ingredients we have collected for our model so far:* integration: get the vestibular signal in the same units as the visual signal* running average: accumulate evidence over some time, so that perception is stable* decision if there was self motion (threshold) Since the velocity judgments are made at the end of the 10-second trials, it seems reasonable to use the sensory signals at the last sample to estimate what percept the participants said they had. --- Section 4: Formulating hypotheses
###Code
# @title Video 4: Hypotheses
video = YouTubeVideo(id='wgOpbfUELqU', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal**: formulate reasonable hypotheses in mathematical language using the ingredients identified in step 3. Write the hypotheses in a form that we can evaluate the model against later.**Question:** _Why do we experience the illusion?_We know that there are two real motion signals, and that these drive two sensory signals:> $w_v$: world motion (velocity magnitude)> > $s_v$: self motion (velocity magnitude)> > $s_{visual}$: optic flow signal> > $s_{vestibular}$: vestibular signalOptic flow is ambiguous, as both world motion and self motion drive visual motion.$$s_{visual} = w_v - s_v + noise$$Notice that world motion and self motion might cancel out. For example, if the train you are on, and the train you are looking at, both move at exactly the same speed.Vestibular signals are driven only by self motion, but _can_ be ambiguous when they are noisy. $$s_{vestibular} = s_v + noise$$**Combining Relationships**Without the sensory noise, these two relations are two linear equations, with two unknowns!This suggests the brain could simply "solve" for $s_v$ and $w_v$. However, given the noisy signals, sometimes these solutions will not be correct. Perhaps that is enough to explain the illusion? TD 4.1: Write out HypothesisUse the discussion and framing to write out your hypothesis in the form:> Illusory self-motion occurs when (give preconditions). We hypothesize it occurs because (explain how our hypothesized relationships work) TD 4.2: Relate hypothesis to ingredientsNow it's time to pull together the ingredients and relate them to our hypothesis. **For each trial we have:**| variable | description || ---- | ---- || $\hat{v_s}$ | **self motion judgment** (in m/s)|| $\hat{v_w}$ | **world motion judgment** (in m/s)|| $s_{ves}$ | **vestibular info** filtered and integrated vestibular information || $s_{opt}$ | **optic flow info** filtered optic flow information || $z_s$ | **Self-motion detection** boolean value (True/False) indicating whether the vestibular info was above threshold or not |Answer the following questions by replotting your data and ingredients: * which of the 5 variables does our hypothesis say should be related?* what do you expect these plots to look like?
###Code
# Run to calculate variables
# these 5 lines calculate the main variables that we might use in the model
s_ves = my_moving_window(np.cumsum(vestibular * .1, axis=1), window=100)[:, 99]
s_opt = my_moving_window(opticflow, window=50)[:, 99]
v_s = s_ves
v_w = -s_opt - v_s
z_s = (s_ves > 0.33)
###Output
_____no_output_____
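###Markdown
As a quick sanity check on the reasoning above, the toy simulation below generates many artificial trials directly from the two stated relationships ($s_{visual} = w_v - s_v + noise$ and $s_{vestibular} = s_v + noise$) and then "solves" them naively: take the vestibular signal as the self-motion estimate and add it to the visual signal to recover world motion. The noise levels here are arbitrary illustrative choices (with the vestibular signal noisier), and the sign conventions of the real dataset may differ; the point is only that noisy solutions are sometimes wrong, which is all an illusion needs.
###Code
# Toy simulation of the two generative equations above (illustration only)
np.random.seed(0)
n_trials = 1000
self_v = np.random.choice([0., 1.], size=n_trials)    # true self-motion velocity
world_v = 1. - self_v                                  # the other train moves instead
noise_vis, noise_ves = 0.1, 0.5                        # arbitrary noise levels (vestibular noisier)
s_visual_toy = world_v - self_v + np.random.randn(n_trials) * noise_vis
s_vestibular_toy = self_v + np.random.randn(n_trials) * noise_ves
self_est_toy = s_vestibular_toy                        # naive solution for s_v
world_est_toy = s_visual_toy + self_est_toy            # naive solution for w_v
errors = np.mean((self_est_toy > 0.5) != (self_v > 0.5))
print(f'proportion of toy trials with the wrong self-motion decision: {errors:.2f}')
###Output
_____no_output_____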
###Markdown
In the first chunk of code we plot histograms to compare the variability of the estimates of velocity we get from each of the two sensory signals.**Plot histograms**
###Code
# Plot histograms
plt.figure(figsize=(8, 6))
plt.hist(s_ves, label='vestibular', alpha=0.5)  # velocity estimates from the vestibular signal
plt.hist(s_opt, label='visual', alpha=0.5)  # velocity estimates from optic flow
plt.ylabel('frequency')
plt.xlabel('velocity estimate')
plt.legend(facecolor='xkcd:white')
plt.show()
###Output
_____no_output_____
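###Markdown
To put a number on what the histograms show, we can also compare the spread of the two sets of velocity estimates directly; since `s_ves` and `s_opt` (computed above) are in the same units, their standard deviations are directly comparable.
###Code
# Quantify the spread of the two velocity estimates shown in the histograms
print(f'standard deviation of vestibular-based estimates: {np.std(s_ves):.3f}')
print(f'standard deviation of visual-based estimates:     {np.std(s_opt):.3f}')
###Output
_____no_output_____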
###Markdown
This matches our earlier observation that the vestibular signals are noisier than the visual signals.Below is generic code to create scatter diagrams. Use it to see whether the relationships between variables are the way you expect them to be. For example, what is the relationship between the estimates of self motion and world motion, as we calculate them here? Exercise 2: Build a scatter plot
###Code
# this sets up a figure with some dotted lines on y=0 and x=0 for reference
plt.figure(figsize=(8, 8))
plt.plot([0, 0], [-0.5, 1.5], ':', color='xkcd:black')
plt.plot([-0.5, 1.5], [0, 0], ':', color='xkcd:black')
#############################################################################
# uncomment below and fill in with your code
#############################################################################
# determine which variables you want to look at (variable on the abscissa / x-axis, variable on the ordinate / y-axis)
# plt.scatter(...)
plt.xlabel('world-motion velocity [m/s]')
plt.ylabel('self-motion velocity [m/s]')
plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_f533b89d.py)*Example output:* Below is code that uses $z_s$ to split the trials into two categories (i.e., $s_{ves}$ below or above threshold) and plots the mean in each category. Exercise 3: Split variable means bar graph
###Code
###################################
# Fill in source_var and uncomment
####################################
# source variable you want to check
source_var = ...
# below = np.mean(source_var[np.where(np.invert(z_s))[0]])
# above = np.mean(source_var[np.where(z_s)[0]] )
# plt.bar(x=[0, 1], height=[below, above])
plt.xticks([0, 1], ['below', 'above'])
plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_e66e09ba.py)*Example output:* --- Section 5: Toolkit selection
###Code
# @title Video 5: Toolkit
video = YouTubeVideo(id='rsmnayVfJyM', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
Tutorial 1: Framing the Question**Week 1, Day 2: Modeling Practice****By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the first 4 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)): **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypothesesThe remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling. **Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all within a section before moving on! DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistic regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have build the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question *Please discuss the following for about 25 min*You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:* What exact aspect of data needs modeling? * Answer this question clearly and precisely!Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet)* Define an evaluation method! * How will you know your modeling is good? * E.g. comparison to specific data (quantitative method of comparison?)* For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experimentYou can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsQuestion is too general Remember: science advances one small step at a time. Get the small step right… Precise aspect of phenomenon you want to model is unclear You will fail to ask a meaningful question You have already chosen a toolkit This will prevent you from thinking deeply about the best way to answer your scientific question You don’t have a clear goal What do you want to get out of modeling? You don’t have a potential experiment in mind This will help concretize your objectives and think through the logic behind your goal **Note**The hardest part is Step 1. Once that is properly set up, all the others should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green line do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field...The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!**Here is what you should get out of it**:* Survey the literature * What’s known? * What has already been done? * Previous models as a starting point? * What hypotheses have been proposed in the field? * Are there any alternative / complementary modeling approaches?* What skill sets are required? * Do I need to learn something before I can start? * Ensure that no important aspect is missed* Potentially provides specific data sets / alternative modeling approaches for comparison **Do this AFTER the tutorial** ---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mq4y1x77s", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
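###Markdown
To make the computational-model ingredients listed above concrete, here is a rough sketch of how *v(t)*, the integration step, the filter *f* and the threshold θ might be wired into a single decision function. It assumes a 0.1 s sample interval and a plain running average, as in the demo earlier in this notebook; it is meant only to illustrate how a list of ingredients turns into code, not to be a finished model.
###Code
# Sketch: wiring the example-project ingredients into one decision function (illustration only)
import numpy as np
def detect_self_motion(v, threshold, window=50, dt=0.1):
    """v: vestibular acceleration samples for one trial -> True if self motion is detected."""
    velocity = np.cumsum(v) * dt                # integration: acceleration -> velocity
    smoothed = np.mean(velocity[-window:])      # filter f: running average over the last samples
    return smoothed > threshold                 # decision d: compare to threshold theta
# toy usage on made-up noisy acceleration samples
rng = np.random.default_rng(1)
fake_trial = rng.normal(loc=0.2, scale=1.0, size=100)
print(detect_self_motion(fake_trial, threshold=0.33))
###Output
_____no_output_____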
###Markdown
Think! 3: Determine your basic ingredients *Please discuss the following for about 25 min*This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:1. What parameters / variables are needed? * Constants? * Do they change over space, time, conditions…? * What details can be omitted? * Constraints, initial conditions? * Model inputs / outputs?2. Variables needed to describe the process to be modelled? * Brainstorming! * What can be observed / measured? Latent variables? * Where do these variables come from? * Do any abstract concepts need to be instantiated as variables? * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics * Instantiate them so that they relate to potential measurements!This is a step where your prior knowledge and intuition are tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated. **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsI’m experienced, I don’t need to think about ingredients anymore Or so you think… I can’t think of any ingredients Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure? I have all inputs and outputs Good! But what will link them? Thinking about that will start shaping your model and hypotheses I can’t think of any links (= mechanisms) You will acquire a library of potential mechanisms as you keep modeling and learning But the literature will often give you hints through hypotheses If you still can't think of links, then maybe you're missing ingredients? ---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1fh411h7aX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
>we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)* around its mean, i.e.
<div align="center">
<em>N</em><sup>2</sup> = <b>E</b>[(<em>v(t)</em> - <b>E</b>[<em>v(t)</em>])<sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
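###Markdown
Looking ahead to evaluating classification-based hypotheses like those in the Data Analysis example above, one possible way to quantify classification performance is k-fold cross-validation with the `LogisticRegression` and `cross_val_score` utilities already imported in the Setup cell, using the total spike count of each neuron as features and the perceptual reports as labels. The sketch below is only an illustration of that kind of toolkit, not the tutorial's prescribed analysis.
###Code
# Sketch: cross-validated classification of perceptual reports from spike counts
# (assumes the `spikes` and `perception` arrays created in the Setup cell)
X = spikes.sum(axis=3).reshape(-1, spikes.shape[2])   # total spike count per neuron, per trial
y = perception.reshape(-1)                            # reported self motion (0/1) per trial
accuracies = cross_val_score(LogisticRegression(solver='liblinear', random_state=0),
                             X, y, cv=8)
plotCrossValAccuracies(accuracies)
plt.show()
###Output
_____no_output_____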
###Markdown
Tutorial 1: Framing the Question**Week 1, Day 2: Modeling Practice****By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the 4 first steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)): **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypothesesThe remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you likely have to revite some or all of these steps *before* you move on the the remaining steps of modeling. **Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Content on the left side of the notebook. Make sure you complete all within a section before moving on! DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have build the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question *Please discuss the following for about 25 min*You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:* What exact aspect of data needs modeling? * Answer this question clearly and precisely!Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet)* Define an evaluation method! * How will you know your modeling is good? * E.g. comparison to specific data (quantitative method of comparison?)* For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experimentYou can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsQuestion is too general Remember: science advances one small step at the time. Get the small step right… Precise aspect of phenomenon you want to model is unclear You will fail to ask a meaningful question You have already chosen a toolkit This will prevent you from thinking deeply about the best way to answer your scientific question You don’t have a clear goal What do you want to get out of modeling? You don’t have a potential experiment in mind This will help concretize your objectives and think through the logic behind your goal **Note**The hardest part is Step 1. Once that is properly set up, all other should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal than you thought. Revisiting Step 1 is frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the nest step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green line do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field...The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!**Here is what you should get out of it**:* Survey the literature * What’s known? * What has already been done? * Previous models as a starting point? * What hypotheses have been emitted in the field? * Are there any alternative / complementary modeling approaches?* What skill sets are required? * Do I need learn something before I can start? * Ensure that no important aspect is missed* Potentially provides specific data sets / alternative modeling approaches for comparison **Do this AFTER the tutorial** ---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mq4y1x77s", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
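# ---------------------------------------------------------------------------
# Illustrative sketch (a toy example, not the actual project model): one way
# the ingredients listed above -- a noisy vestibular input v(t), a running-
# average filter f, an integration step, a threshold theta and a binary
# decision d -- could fit together. All names, parameter values and the
# Gaussian noise model below are assumptions made purely for illustration.
# ---------------------------------------------------------------------------
import numpy as np

def sketch_self_motion_decision(acceleration, dt=0.01, noise_sd=1.0,
                                window=10, theta=0.5, rng=None):
    """Toy decision: was there self motion, given a noisy acceleration signal?"""
    rng = np.random.default_rng() if rng is None else rng
    v = acceleration + rng.normal(0, noise_sd, size=np.shape(acceleration))  # vestibular input v(t)
    f = np.convolve(v, np.ones(window) / window, mode='same')                # running-average filter
    velocity = np.cumsum(f) * dt                                             # integrate to sensed velocity
    return int(np.max(np.abs(velocity)) > theta)                             # binary decision d via threshold theta

# e.g. d = sketch_self_motion_decision(np.zeros(301))  # a noise-only, no-motion trial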
###Output
_____no_output_____
###Markdown
Think! 3: Determine your basic ingredients *Please discuss the following for about 25 min*This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:1. What parameters / variables are needed? * Constants? * Do they change over space, time, conditions…? * What details can be omitted? * Constraints, initial conditions? * Model inputs / outputs?2. Variables needed to describe the process to be modelled? * Brainstorming! * What can be observed / measured? latent variables? * Where do these variables come from? * Do any abstract concepts need to be instantiated as variables? * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics * Instantiate them so that they relate to potential measurements!This is a step where your prior knowledge and intuition are tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated. **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsI’m experienced, I don’t need to think about ingredients anymore Or so you think… I can’t think of any ingredients Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure? I have all inputs and outputs Good! But what will link them? Thinking about that will start shaping your model and hypotheses I can’t think of any links (= mechanisms) You will acquire a library of potential mechanisms as you keep modeling and learning But the literature will often give you hints through hypotheses If you still can't think of links, then maybe you're missing ingredients? ---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1fh411h7aX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this can be written as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength, *N* is the noise level, and *k* is a free parameter.
> we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = √(<b>E</b>[<em>v(t)</em><sup>2</sup>] - <b>E</b>[<em>v(t)</em>]<sup>2</sup>),
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
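# ---------------------------------------------------------------------------
# Illustrative sketch (a toy helper, not part of the original project material):
# once we have an illusion strength S for several noise levels N, the free
# parameter k in S = k * N can be estimated by least squares. The function
# name and the choice of a line through the origin are assumptions made
# purely for illustration.
# ---------------------------------------------------------------------------
import numpy as np

def fit_k(noise_levels, illusion_strength):
    """Least-squares estimate of k in S = k * N (line through the origin)."""
    N = np.asarray(noise_levels, dtype=float)
    S = np.asarray(illusion_strength, dtype=float)
    return np.sum(N * S) / np.sum(N ** 2)

# e.g. fit_k([0.5, 1.0, 1.5], [0.12, 0.21, 0.33])  # made-up numbers, for illustration only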
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
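# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original project material): one way
# Hyp 1 could be tested on the data loaded above -- compare cross-validated
# classification accuracy when the features are spike counts accumulated over
# the whole trial versus counts in a short window around peak acceleration.
# The function name, the 10-bin window around t = 0 and the use of logistic
# regression here are assumptions made purely for illustration.
# ---------------------------------------------------------------------------
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_accumulated_vs_window(spikes, perception, window=(145, 155), cv=8):
    """Return (accuracy with accumulated counts, accuracy with peak-window counts)."""
    n_move, n_trials, n_neurons, n_bins = np.shape(spikes)
    labels = np.reshape(perception, (-1,))                             # perceived self motion per trial
    accum = spikes.sum(axis=3).reshape(-1, n_neurons)                  # counts accumulated over the whole trial
    win = spikes[:, :, :, window[0]:window[1]].sum(axis=3).reshape(-1, n_neurons)
    model = LogisticRegression(solver='liblinear', random_state=0)
    acc_accum = cross_val_score(model, accum, labels, cv=cv).mean()
    acc_win = cross_val_score(model, win, labels, cv=cv).mean()
    return acc_accum, acc_win

# e.g. acc_accum, acc_win = compare_accumulated_vs_window(spikes, perception)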
###Output
_____no_output_____
###Markdown
Tutorial 1: Framing the Question**Week 1, Day 2: Modeling Practice****By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the first 4 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)): **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypothesesThe remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling. **Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all within a section before moving on! DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
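# rasterplot: spike times (bins with at least one spike, converted to seconds
# relative to the middle of the trial) for every neuron in one trial of one
# movement condition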
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
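# plotCrossValAccuracies: boxplot of the cross-validation accuracies with the
# individual fold scores overlaid and the average accuracy in the title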
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
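# generateSpikeTrains: simulate Poisson spike counts (10 ms bins, 3 s trials) for
# 50 neurons with random gains and baseline firing rates, modulated by a Gaussian
# motion profile that is scaled by the movement condition (0, 1 or 2)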
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
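# subsetPerception: train a logistic regression on summed spike counts from the
# first half of the trials, use its predictions on the held-out half as the
# "perceived motion" labels, and return those labels together with a 40-neuron
# subset of the held-out spike trains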
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
  w_1 = max(w_idx) + 1  # +1 because Python slice end indices are exclusive
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
  # this line creates a logistic regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question *Please discuss the following for about 25 min*You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:* What exact aspect of data needs modeling? * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet)* Define an evaluation method! * How will you know your modeling is good? * E.g. comparison to specific data (quantitative method of comparison?)* For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment. You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsQuestion is too general Remember: science advances one small step at a time. Get the small step right… Precise aspect of phenomenon you want to model is unclear You will fail to ask a meaningful question You have already chosen a toolkit This will prevent you from thinking deeply about the best way to answer your scientific question You don’t have a clear goal What do you want to get out of modeling? You don’t have a potential experiment in mind This will help concretize your objectives and think through the logic behind your goal **Note** The hardest part is Step 1. Once that is properly set up, all other steps should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field... The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough! **Here is what you should get out of it**:* Survey the literature * What’s known? * What has already been done? * Previous models as a starting point? * What hypotheses have been put forward in the field? * Are there any alternative / complementary modeling approaches?* What skill sets are required? * Do I need to learn something before I can start? * Ensure that no important aspect is missed* Potentially provides specific data sets / alternative modeling approaches for comparison **Do this AFTER the tutorial** ---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mq4y1x77s", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 3: Determine your basic ingredients *Please discuss the following for about 25 min*This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:1. What parameters / variables are needed? * Constants? * Do they change over space, time, conditions…? * What details can be omitted? * Constraints, initial conditions? * Model inputs / outputs?2. Variables needed to describe the process to be modelled? * Brainstorming! * What can be observed / measured? latent variables? * Where do these variables come from? * Do any abstract concepts need to be instantiated as variables? * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics * Instantiate them so that they relate to potential measurements! This is a step where your prior knowledge and intuition are tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated. **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsI’m experienced, I don’t need to think about ingredients anymore Or so you think… I can’t think of any ingredients Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure? I have all inputs and outputs Good! But what will link them? Thinking about that will start shaping your model and hypotheses I can’t think of any links (= mechanisms) You will acquire a library of potential mechanisms as you keep modeling and learning But the literature will often give you hints through hypotheses If you still can't think of links, then maybe you're missing ingredients? ---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1fh411h7aX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
> we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = <b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 1 Modeling Practice: Framing the question__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by building a simple model.We will investigate a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsTutorial 1 (this notebook) will cover the steps 1-5, while Tutorial 2 will cover the steps 6-10.**TD**: All activities you should perform are labeled with **TD.**, which stands for "To Do", micro-tutorial number, activity number. They can be found in the Table of Content on the left side of the notebook. Make sure you complete all within a section before moving on!**Run**: Some code chunks' names start with "Run to ... (do something)". These chunks are purely to produce a graph or calculate a number. You do not need to look at or understand the code in those chunks. Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window,
FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
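# A faster alternative, sketched here for illustration only (not used by the tutorial):
# for a NaN-free 1-D signal, the same trailing moving average can be computed with a
# convolution, as hinted at in the docstring above. Edge handling matches
# my_moving_window: near the start, only the available samples are averaged.
def my_moving_window_conv(x, window=3):
  kernel = np.ones(window)
  # trailing sums over the current sample and the (window - 1) previous ones:
  sums = np.convolve(x, kernel, mode='full')[:x.size]
  # number of samples actually inside the window at each position:
  counts = np.minimum(np.arange(1, x.size + 1), window)
  return sums / counts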
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition {condition:d} judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition {condition:d} prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis',\
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row',
figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
                           color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
  pselfmove_nomove[:] = np.nan
  pselfmove_move[:] = np.nan
  prop_correct[:] = np.nan
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion
# classified correctly: (1 - pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
idx = np.argmax(prop_correct[::-1]) + 1
plt.plot([thresholds[-idx]]*2, [0, 1], '--', color='xkcd:purple',
label='best classification')
plt.text(0.7, 0.8,
f"threshold:{thresholds[-idx]:0.2f}\
\ncorrect: {prop_correct[-idx]:0.2f}")
plt.xlabel('threshold')
plt.ylabel('proportion classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
###Markdown
--- Section 1: Investigating the phenomenon
###Code
# @title Video 1: Question
video = YouTubeVideo(id='x4b2-hZoyiY', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal**: formulate a good question!**Background: The train illusion**In the video you have learnt about the train illusion. In the same situation, we sometimes perceive our own train to be moving and sometimes the other train. How come our perception is ambiguous? We will build a simple model with the goal _to learn about the process of model building_ (i.e.: not to explain train illusions or get a correct model). To keep this manageable, we use a _simulated_ data set. For the same reason, this tutorial contains both coding and thinking activities. Doing both is essential for success. Imagine we get data from an experimentalist who collected _judgments_ on self motion and world motion, in two conditions. One where there was only world motion, and one where there was only self motion. In either case, the velocity increased from 0 to 1 m/s across 10 seconds with the same (fairly low) acceleration. Each of these conditions was recorded 100 times:![illustration of the conditions](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig01.png)Participants sit very still during the trials and at the end of each 10 s trial they are given two sliders, one to indicate the self-motion velocity (in m/s) and another to indicate the world-motion velocity (in m/s) _at the end of the interval_. TD 1.1: Form expectations about the experiment, using the phenomena In the experiment we get the participants' _judgments_ of the velocities they experienced. In the Python chunk below, you should retain the numbers that represent your expectations on the participants' judgments. Remember that in the train illusion people usually experience either self motion or world motion, but not both. From the lists, remove those pairs of responses you think are unlikely to be the participants' judgments. The first two pairs of coordinates (1 m/s, 0 m/s, and 0 m/s, 1 m/s) are the stimuli, so those reflect judgments without illusion. Those should stay, but how do you think participants judge velocities when they _do_ experience the illusion?**Create Expectations**
###Code
# Create Expectations
###################################################################
# To complete the exercise, remove unlikely responses from the two
# lists. The lists act as X and Y coordinates for a scatter plot,
# so make sure the lists match in length.
###################################################################
world_vel_exp = [1, 0, 1, 0.5, 0.5, 0]
self_vel_exp = [0, 1, 1, 0.5, 0, 0.5]
# The code below creates a figure with your predictions:
my_plot_percepts(datasets={'expectations': {'world': world_vel_exp,
'self': self_vel_exp}})
###Output
_____no_output_____
###Markdown
**TD 1.2**: Compare Expectations to Data The behavioral data from our experiment is in a 200 x 5 matrix called `judgments`, where each row indicates a trial. The first three columns in the `judgments` matrix represent the conditions in the experiment, and the last two columns list the velocity judgments.![illustration of the judgments matrix](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig02.png)The condition number can be 0 (world-motion condition, first 100 rows) or 1 (self-motion condition, last 100 rows). Columns 1 and 2 respectively list the true self- and world-motion velocities in the experiment. You will not have to use the first three columns. The motion judgements (columns 3 and 4) are the participants' judgments of the self-motion velocity and world-motion velocity respectively, and should show the illusion. Let's plot the judgment data, along with the true motion of the stimuli in the experiment:
###Code
#@title
#@markdown Run to plot perceptual judgments
my_plot_percepts(datasets={'judgments': judgments}, plotconditions=True)
###Output
_____no_output_____
###Markdown
TD 1.3: Think about what the data is saying by answering these questions:* How does it differ from your initial expectations? * Where are the clusters of data, roughly?* What does it mean that some of the judgments from the world-motion condition are close to the self-motion stimulus and vice versa?* Why are there no data points in the middle?* What aspects of the data require explanation? --- Section 2: Understanding background
###Code
# @title Video 2: Background
video = YouTubeVideo(id='DcJ91h5Ekis', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Now that we have an interesting phenomenon, we gather background information which will refine our questions, and we lay the groundwork for developing scientific hypotheses. **Background: Motion Sensing**: Our self-motion percepts are based on our visual (optic flow) and vestibular (inner ear) sensing. Optic flow is the moving image on the retina caused by either self or world-motion. Vestibular signals are related to bodily self- movements only. The two signals can be understood as velocity in $m/s$ (optic flow) and acceleration in $m/s^2$ (vestibular signal). We'll first look at the ground truth which is stimulating the senses in our experiment.
###Code
#@markdown **Run to plot motion stimuli**
my_plot_motion_signals()
###Output
_____no_output_____
###Markdown
TD 2.1: Examine the differences between the conditions:* how are the visual inputs (optic flow) different between the conditions?* how are the vestibular signals different between the conditions?* how might the brain use these signals to determine there is self motion?* how might the brain use these signals to determine there is world motion?We can see that, in theory, we have enough information to disambiguate self-motion from world-motion using these signals. Let's go over the logic together. The visual signal is ambiguous, it will be non-zero when there is either self-motion or world-motion. The vestibular signal is specific, it’s only non-zero when there is self-motion. Combining these two signals should allow us to disambiguate the self-motion condition from the world-motion condition!* In the world-motion condition: The brain can simply compare the visual and vestibular signals. If there is visual motion AND NO vestibular motion, it must be that the world is moving but not the body/self = world-motion judgement.* In the self-motion condition: We can make a similar comparison. If there is both visual signals AND vestibular signals, it must be that the body/self is moving = self-motion judgement. **Background: Integrating signals**: To understand how the vestibular _acceleration_ signal could underlie the perception of self-motion _velocity_, we assume the brain integrates the signal. This also allows comparing the vestibular signal to the visual signal, by getting them in the same units. Read more about integration on [Wikipedia](https://en.wikipedia.org/wiki/Integral).Below we will approximate the integral using `np.cumsum()`. The discrete integral would be:$$v_t = \sum_{k=0}^t a_k\cdot\Delta t + v_0$$* $a(t)$ is acceleration as a function of time* $v(t)$ is velocity as a function of time* $\Delta t$ is equal to the sample interval of our recorded visual and vestibular signals (0.1 s).* $v_0$ is the _constant of integration_ which corresponds in the initial velocity at time $0$ (it would have to be known or remembered). Since that is always 0 in our experiment, we will leave it out from here on. Numerically Integrating a signalBelow is a chunk of code which uses the `np.cumsum()` function to integrate the acceleration that was used in our (simulated) experiment: `a` over `dt` in order to get a velocity signal `v`.
###Code
# Check out the code:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
# This does the integration of acceleration into velocity:
v = np.cumsum(a * dt)
my_plot_stimuli(t, a, v)
###Output
_____no_output_____
###Markdown
**Background: Sensory signals are noisy** In our experiment, we also recorded sensory signals in the participant. The data come in two 200 x 100 matrices:`opticflow` (with the visual signals)and`vestibular` (with the vestibular signals)In each of the signal matrices _rows_ (200) represent **trials**, in the same order as in the `judgments` matrix. _Columns_ (100) are **time samples**, representing 10 s collected with a 100 ms time bin. ![illustration of the signal matrices](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig03.png)Here we plot the data representing our 'sensory signals':* plot optic flow signals for self-motion vs world-motion conditions (should be the same)* plot vestibular signals for self-motion vs world-motion conditions (should be different)The x-axis is time in seconds, but the y-axis can be one of several, depending on what you do with the signals: $m/s^2$ (acceleration) or $m/s$ (velocity).
###Code
#@markdown **Run to plot raw noisy sensory signals**
# signals as they are:
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=False)
###Output
_____no_output_____
###Markdown
TD 2.2: Understanding the problem of noisy sensory information **Answer the following questions:** * Is this what you expected? * In which of the two signals should we be able to see a difference between the conditions? * Can we use the data as it is to differentiate between the conditions? * Can we compare the visual and vestibular motion signals when they're in different units? * What would the brain do to differentiate the two conditions? Now that we know how to integrate the vestibular signal to get it into the same unit as the optic flow, we can see if it shows the pattern it should: a flat line in the world-motion condition and the correct velocity profile in the self-motion condition. Run the chunk of Python below to plot the sensory data again, but now with the vestibular signal integrated.
###Code
#@markdown **Run to compare true signals to sensory data**
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=True, returnaxes=False,
addaverages=False, addGroundTruth=True)
###Output
_____no_output_____
###Markdown
The thick lines are the ground truth: the actual velocities in each of the conditions. With some effort, we can make out that _on average_ the vestibular signal does show the expected pattern after all. But there is also a lot of noise in the data. **Background Summary**: Now that we have examined the sensory signals and understand how they relate to the ground truth, we see that there is enough information to _in principle_ disambiguate true self-motion from true world motion (there should be no illusion!). However, because the sensory information contains a lot of noise, i.e. it is unreliable, it could result in ambiguity. **_It is time to refine our research question:_*** Does the self-motion illusion occur due to unreliable sensory information? --- Section 3: Identifying ingredients
###Code
# @title Video 3: Ingredients
video = YouTubeVideo(id='ZQRtysK4OCo', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
TD 3.1: Understand the moving average function**Goal**: think about what ingredients we will need for our model. We have access to sensory signals from the visual and vestibular systems that are used to estimate world motion and self motion. However, there are still two issues:1. _While sensory input can be noisy or unstable, perception is much more stable._ 2. _In the judgments there is either self motion or not._ We will solve this by using:1. _a moving average filter_ to stabilize our sensory signals 2. _a threshold function_ to distinguish moving from non-moving. One of the simplest models of noise reduction is a moving average (sometimes: moving mean or rolling mean) over the recent past. In a discrete signal we specify the number of samples to use for the average (including the current one), and this is often called the _window size_. For more information on the moving average, check [this Wikipedia page](https://en.wikipedia.org/wiki/Moving_average). In this tutorial there is a simple running average function available: `my_moving_window(s, w)`: takes a signal time series $s$ and a window size $w$ as input and returns the moving average for all samples in the signal. Interactive Demo: Averaging window The code below picks one vestibular signal, integrates it to get a velocity estimate for self motion, and then filters. You can set the window size. Try different window sizes, then answer the following:* What is the maximum window size? The minimum?* Why does increasing the window size shift the curves? * How do the filtered estimates differ from the true signal?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
t = np.arange(0, 10, .1)
def refresh(trial_number=101, window=15):
# get the trial signal:
signal = vestibular[trial_number - 1, :]
# integrate it:
signal = np.cumsum(signal * .1)
# plot this signal
plt.plot(t, signal, label='integrated vestibular signal')
# filter:
signal = my_moving_window(signal, window=window, FUN=np.mean)
# plot filtered signal
plt.plot(t, signal, label=f'filtered with window: {window}')
plt.legend()
plt.show()
_ = widgets.interact(refresh, trial_number=(1, 200, 1), window=(5, 100, 1))
###Output
_____no_output_____
###Markdown
_Note: the function `my_moving_window()` is defined in this notebook in the code block at at the top called "Helper functions". It should be the first function there, so feel free to check how it works._ TD 3.2: Thresholding the self-motion vestibular signalComparing the integrated, filtered (accumulated) vestibular signals with a threshold should allow determining if there is self motion or not.To try this, we:1. Integrate the vestibular signal, apply a moving average filter, and take the last value of each trial's vestibular signal as an estimate of self-motion velocity. 2. Transfer the estimates of self-motion velocity into binary (0,1) decisions by comparing them to a threshold. Remember the output of logical comparators (>=<) are logical (truth/1, false/0). 1 indicates we think there was self-motion and 0 indicates otherwise. YOUR CODE HERE.3. We sort these decisions separately for conditions of real world-motion vs. real self-motion to determine 'classification' accuracy.4. To understand how the threshold impacts classfication accuracy, we do 1-3 for a range of thresholds.There is one line fo code to complete, which will implement step 2. Exercise 1: Threshold self-motion velocity into binary classifiction of self-motion
###Code
def my_threshold(selfmotion_vel_est, threshold):
"""
This function should calculate proportion self motion
for both conditions and the overall proportion
correct classifications.
Args:
selfmotion_vel_est (numpy.ndarray): A sequence of floats
indicating the estimated self motion for all trials.
threshold (float): A threshold for the estimate of self motion when
the brain decides there really is self motion.
Returns:
(numpy.ndarray): self-motion: yes or no.
"""
##############################################################
# Compare the self motion estimates to the threshold:
# Replace '...' with the proper code:
# Remove the next line to test your function
raise NotImplementedError("Modify my_threshold function")
##############################################################
# Compare the self motion estimates to the threshold
is_move = ...
return is_move
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_d278e3f8.py) Interactive Demo: Threshold vs. averaging windowNow we combine the classification steps 1-3 above, for a variable threshold. This will allow us to find the threshold that produces the most accurate classification of self-motion.We also add a 'widget' that controls the size of the moving average window. How does the optimal threshold vary with window size?
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
thresholds = np.round(np.arange(0, 1.01, .01), 2)  # np.round_ is deprecated in favor of np.round
v_ves = np.cumsum(vestibular * .1, axis=1)
def refresh(window=50):
selfmotion_vel_est = my_moving_window(v_ves, window=window,
FUN=np.mean)[:, 99]
[pselfmove_nomove,
pselfmove_move,
pcorrect] = my_moving_threshold(selfmotion_vel_est, thresholds)
my_plot_thresholds(thresholds, pselfmove_nomove, pselfmove_move, pcorrect)
_ = widgets.interact(refresh, window=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Let's unpack this: Ideally, in the self-motion condition (orange line) we should always detect self motion, and never in the world-motion condition (blue line). This doesn't happen, regardless of the settings we pick. However, we can pick a threshold that gives the highest proportion correctly classified trials, which depends on the window size, but is between 0.2 and 0.4. We'll pick the optimal threshold for a window size of 100 (the full signal) which is at 0.33. The ingredients we have collected for our model so far:* integration: get the vestibular signal in the same unit as the visual signal* running average: accumulate evidence over some time, so that perception is stable* decision if there was self motion (threshold) Since the velocity judgments are made at the end of the 10 second trials, it seems reasonable to use the sensory signals at the last sample to estimate what percept the participants said they had. --- Section 4: Formulating hypotheses
###Code
# @title Video 4: Hypotheses
video = YouTubeVideo(id='wgOpbfUELqU', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal**: formulate reasonable hypotheses in mathematical language using the ingredients identified in step 3. Write hypotheses as a function that we evaluate the model against later. **Question:** _Why do we experience the illusion?_We know that there are two real motion signals, and that these drive two sensory signals:> $w_v$: world motion (velocity magnitude)> > $s_v$: self motion (velocity magnitude)> > $s_{visual}$: optic flow signal> > $s_{vestibular}$: vestibular signalOptic flow is ambiguous, as both world motion and self motion drive visual motion.$$s_{visual} = w_v - s_v + noise$$Notice that world motion and self motion might cancel out. For example, if the train you are on, and the train you are looking at, both move at exactly the same speed.Vestibular signals are driven only by self motion, but _can_ be ambiguous when they are noisy. $$s_{vestibular} = s_v + noise$$**Combining Relationships**Without the sensory noise, these two relations are two linear equations, with two unknowns!This suggests the brain could simply "solve" for $s_v$ and $w_v$. However, given the noisy signals, sometimes these solutions will not be correct. Perhaps that is enough to explain the illusion? TD 4.1: Write out HypothesisUse the discussion and framing to write out your hypothesis in the form:> Illusory self-motion occurs when (give preconditions). We hypothesize it occurs because (explain how our hypothesized relationships work) TD 4.2: Relate hypothesis to ingredientsNow it's time to pull together the ingredients and relate them to our hypothesis. **For each trial we have:**| variable | description || ---- | ---- || $\hat{v_s}$ | **self motion judgment** (in m/s)|| $\hat{v_w}$ | **world motion judgment** (in m/s)|| $s_{ves}$ | **vestibular info** filtered and integrated vestibular information || $s_{opt}$ | **optic flow info** filtered optic flow information || $z_s$ | **Self-motion detection** boolean value (True/False) indicating whether the vestibular info was above threshold or not |Answer the following questions by replotting your data and ingredients: * which of the 5 variables does our hypothesis say should be related?* what do you expect these plots to look like?
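Setting the noise terms aside for a moment, the two sensory relations above can be inverted directly; the next code chunk computes essentially this (up to the sign conventions of the simulated signals): $$\hat{s}_v = s_{vestibular} \qquad \hat{w}_v = s_{visual} + \hat{s}_v$$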
###Code
# Run to calculate variables
# these 5 lines calculate the main variables that we might use in the model
s_ves = my_moving_window(np.cumsum(vestibular * .1, axis=1), window=100)[:, 99]
s_opt = my_moving_window(opticflow, window=50)[:, 99]
v_s = s_ves
v_w = -s_opt - v_s
z_s = (s_ves > 0.33)
###Output
_____no_output_____
###Markdown
In the first chunk of code we plot histograms to compare the variability of the estimates of velocity we get from each of two sensory signals.**Plot histograms**
###Code
# Plot histograms
plt.figure(figsize=(8, 6))
plt.hist(s_ves, label='vestibular', alpha=0.5) # set the first argument here
plt.hist(s_opt, label='visual', alpha=0.5) # set the first argument here
plt.ylabel('frequency')
plt.xlabel('velocity estimate')
plt.legend(facecolor='xkcd:white')
plt.show()
###Output
_____no_output_____
###Markdown
This matches our expectation that the vestibular signals are noisier than the visual signals. Below is generic code to create scatter diagrams. Use it to see if the relationships between variables are the way you expect. For example, what is the relationship between the estimates of self motion and world motion, as we calculate them here? Exercise 2: Build a scatter plot
###Code
# this sets up a figure with some dotted lines on y=0 and x=0 for reference
plt.figure(figsize=(8, 8))
plt.plot([0, 0], [-0.5, 1.5], ':', color='xkcd:black')
plt.plot([-0.5, 1.5], [0, 0], ':', color='xkcd:black')
#############################################################################
# uncomment below and fill in with your code
#############################################################################
# determine which variables you want to look at (variable on the abscissa / x-axis, variable on the ordinate / y-axis)
# plt.scatter(...)
plt.xlabel('world-motion velocity [m/s]')
plt.ylabel('self-motion velocity [m/s]')
plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_f533b89d.py)*Example output:* Below is code that uses $z_s$ to split the trials into two categories (i.e., $s_{ves}$ below or above threshold) and plots the mean in each category. Exercise 3: Split variable means bar graph
###Code
###################################
# Fill in source_var and uncomment
####################################
# source variable you want to check
source_var = ...
# below = np.mean(source_var[np.where(np.invert(z_s))[0]])
# above = np.mean(source_var[np.where(z_s)[0]] )
# plt.bar(x=[0, 1], height=[below, above])
plt.xticks([0, 1], ['below', 'above'])
plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial1_Solution_2bfeb67c.py)*Example output:* --- Section 5: Toolkit selection
###Code
# @title Video 5: Toolkit
video = YouTubeVideo(id='rsmnayVfJyM', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____ |
AutoRec/AutoRec_tf2.ipynb | ###Markdown
Reference* http://users.cecs.anu.edu.au/~u5098633/papers/www15.pdf* https://grouplens.org/datasets/movielens/1m/* https://vitobellini.github.io/posts/2018/01/03/how-to-build-a-recommender-system-in-tensorflow.html* https://github.com/npow/AutoRec
###Code
import random
import numpy as np
import pandas as pd
import tensorflow as tf
print('version info:')
print('Numpy\t', np.__version__)
print('Pandas\t', pd.__version__)
print('TF\t', tf.__version__)
BATCH_SIZE = 256
WEIGHT_DECAY = 5e-4
###Output
_____no_output_____
###Markdown
[Option] download MovieLens 1M dataset
###Code
import os
import wget
import zipfile
dirPath = '../dataset'
zipFilePath = os.path.join(dirPath, 'ml-1m.zip')
remoteRrl = 'https://files.grouplens.org/datasets/movielens/ml-1m.zip'
if not os.path.exists(dirPath):
os.makedirs(dirPath)
# download
wget.download(remoteRrl, zipFilePath)
# unzip files
with zipfile.ZipFile(zipFilePath, 'r') as zipRef:
zipRef.extractall(dirPath)
###Output
100% [..........................................................................] 5917549 / 5917549
###Markdown
load dataset
###Code
df = pd.read_csv('../dataset/ml-1m/ratings.dat', sep='::', engine='python', names=['UserID', 'MovieID', 'Rating', 'Timestamp'], header=None)
df = df.drop('Timestamp', axis=1)
numOfUsers = df.UserID.nunique()
numOfItems = df.MovieID.nunique()
df.head()
# Normalize rating in [0, 1]
ratings = df.Rating.values.astype(np.float64)
scaledRatings = (ratings - min(ratings)) / (max(ratings) - min(ratings))
df.Rating = pd.DataFrame(scaledRatings)
df.head()
# user-item rating matrix
## U-AutoRec (users-based)
userItemRatingMatrix = df.pivot(index='UserID', columns='MovieID', values='Rating')
## I-AutoRec (items-based)
# userItemRatingMatrix = df.pivot(index='MovieID', columns='UserID', values='Rating')
userItemRatingMatrix.fillna(-1, inplace=True)
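# -1 marks unrated entries; these are masked out of the loss during training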
userItemRatingMatrix
# create tf.dataset
def getDataset(userItemRatingMatrix):
userItemRatingMatrix_np = userItemRatingMatrix.to_numpy(dtype=np.float32)
np.random.shuffle(userItemRatingMatrix_np)  # shuffle the user rows in place
# [train : valid : test] = [0.7 : 0.15 : 0.15]
numOfTrainSet = int(numOfUsers * 0.7)
numOfValidSet = int(numOfUsers * 0.15)
numOfTestSet = numOfUsers - numOfTrainSet - numOfValidSet
trainSet_np = userItemRatingMatrix_np[0:numOfTrainSet]
validSet_np = userItemRatingMatrix_np[numOfTrainSet:numOfTrainSet+numOfValidSet]
testSet_np = userItemRatingMatrix_np[numOfTrainSet+numOfValidSet:]
trainSet = tf.data.Dataset.from_tensor_slices(trainSet_np)
validSet = tf.data.Dataset.from_tensor_slices(validSet_np)
testSet = tf.data.Dataset.from_tensor_slices(testSet_np)
trainSet = trainSet.shuffle(buffer_size=BATCH_SIZE*8).batch(BATCH_SIZE)
validSet = validSet.batch(BATCH_SIZE)
testSet = testSet.batch(BATCH_SIZE)
return trainSet, validSet, testSet
trainSet, validSet, testSet = getDataset(userItemRatingMatrix)
###Output
_____no_output_____
###Markdown
build model
###Code
# build model
## tf.keras.Dense: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense
regularizer = tf.keras.regularizers.L2(WEIGHT_DECAY)
def getEncoder(numOfInput, numOfHidden1, numOfHidden2):
x = tf.keras.Input(shape=(numOfInput,))
out = tf.keras.layers.Dense(units=numOfHidden1, activation='sigmoid', kernel_regularizer=regularizer) (x)
out = tf.keras.layers.Dense(units=numOfHidden2, activation='sigmoid', kernel_regularizer=regularizer) (out)
return tf.keras.Model(inputs=[x], outputs=[out])
def getDecoder(numOfInput, numOfHidden1, numOfHidden2):
x = tf.keras.Input(shape=(numOfHidden2,))
out = tf.keras.layers.Dense(units=numOfHidden1, activation='sigmoid', kernel_regularizer=regularizer) (x)
out = tf.keras.layers.Dense(units=numOfInput, activation='sigmoid', kernel_regularizer=regularizer) (out)
return tf.keras.Model(inputs=[x], outputs=[out])
def getAutoEncoder(numOfInput, numOfHidden1, numOfHidden2):
encoder = getEncoder(numOfInput, numOfHidden1, numOfHidden2)
decoder = getDecoder(numOfInput, numOfHidden1, numOfHidden2)
return encoder, decoder
encoder, decoder = getAutoEncoder(numOfInput=userItemRatingMatrix.shape[-1], numOfHidden1=10, numOfHidden2=5)
###Output
_____no_output_____
###Markdown
training
###Code
# optimizer
optimizer = tf.keras.optimizers.SGD(learning_rate=5e-1, momentum=0.9)
# loss function
def getLoss(pred, gt, mask):
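# masked mean squared reconstruction error: only observed ratings (mask == 1) contribute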
reconstructionLoss = tf.reduce_sum(tf.pow(pred - gt, 2) * mask, axis=-1) / tf.reduce_sum(mask, -1)
reconstructionLoss = tf.reduce_mean(reconstructionLoss)
return reconstructionLoss
# training with tf.GradientTape
from collections import defaultdict
weights = encoder.trainable_weights + decoder.trainable_weights
records = defaultdict(list)
numOfEpochs = 100
for epoch in range(numOfEpochs):
trainLosses = []
for step, batch in enumerate(trainSet):
mask = tf.cast(batch != -1, dtype=tf.float32)
# replace unrated value with mean of ratings
mean = tf.reduce_sum(batch * mask, axis=-1, keepdims=True) / tf.reduce_sum(mask, axis=-1, keepdims=True)
x = mask * batch + (1 - mask) * mean
with tf.GradientTape() as tape:
embedding = encoder(x, training=True)
pred = decoder(embedding, training=True)
loss = getLoss(pred=pred, gt=x, mask=mask)
grads = tape.gradient(loss, weights)
optimizer.apply_gradients(zip(grads, weights))
trainLosses.append(loss)
if epoch%5 == 0:
# calculate reconstruction loss from validation dataset
validLosses = []
for batch in validSet:
mask = tf.cast(batch != -1, dtype=tf.float32)
# replace unrated value with mean of ratings
mean = tf.reduce_sum(batch * mask, axis=-1, keepdims=True) / tf.reduce_sum(mask, axis=-1, keepdims=True)
x = mask * batch + (1 - mask) * mean
embedding = encoder(x, training=False)
pred = decoder(embedding, training=False)
validLoss = getLoss(pred=pred, gt=x, mask=mask)
validLosses.append(validLoss)
records['train'].append(tf.reduce_mean(trainLosses).numpy())
records['valid'].append(tf.reduce_mean(validLosses).numpy())
print(f"epoch:{epoch}, trainLoss:{records['train'][-1]}, validLoss:{records['valid'][-1]}")
import matplotlib.pyplot as plt
plt.plot(records['train'], label='train')
plt.plot(records['valid'], label='valid')
plt.legend()
plt.ylabel('Loss(RMSE)')
plt.xlabel('Epochs')
###Output
_____no_output_____
###Markdown
testing: calculate the RMSE of ratings
###Code
testLosses = []
for batch in testSet:
mask = tf.cast(batch != -1, dtype=tf.float32)
# replace unrated value with mean of ratings
mean = tf.reduce_sum(batch * mask, axis=-1, keepdims=True) / tf.reduce_sum(mask, axis=-1, keepdims=True)
x = mask * batch + (1 - mask) * mean
embedding = encoder.predict(x)
pred = decoder.predict(embedding)
testLoss = getLoss(pred=pred, gt=x, mask=mask)
testLosses.append(testLoss)
print(f'RMSE: {tf.reduce_mean(testLosses)}')
###Output
RMSE: 0.06309384107589722
|
Linear_Regresion_with_PyTorch_Titanic_Dataset.ipynb | ###Markdown
Finally, it is possible to verify that the model has the same loss as in the validation test above
###Code
test_loader = DataLoader(val_dataset, batch_size=32)
result = evaluate(model2, test_loader)
result
###Output
_____no_output_____
###Markdown
Step 9: Commit and Upload the Notebook
###Code
jovian.commit(project='titanic-linear-regression', environment=None, outputs=['titanic_linear_regresion_model.pth'])
###Output
[31m[jovian] Error: Failed to detect Jupyter notebook or Python script. Skipping..[0m
###Markdown
**Linear Regression with PyTorch**The dataset selected for this notebook is the Titanic dataset. This dataset contains information where each row represents one person. The columns describe different attributes about the person including whether they survived, their age, their passenger-class, their sex, and the fare they paid. Finally, the dataset is divided into training and evaluation data.The training and evaluation data can be found in the following links:* Training data: https://storage.googleapis.com/tf-datasets/titanic/train.csv* Evaluation data: https://storage.googleapis.com/tf-datasets/titanic/eval.csvThis notebook is focused on predicting whether a passenger survived or not, and it is built based on the following steps:1. Install dependencies2. Import modules3. Download and dataset analysis4. Preparing the dataset for training5. Creating a Linear Regression model6. Training the model and fitting the data7. Making predictions using the trained model8. Saving the model9. Committing and uploading the notebookThis notebook is based on the concepts from the first two lectures and is part of Assignment 2 of the course Deep Learning with PyTorch, which you can find in the following links:* Assignment 2 - Linear Regression: [click here](https://jovian.ml/fabianac07/assingment-02-linear-regression) * Lecture 1: [click here](https://www.youtube.com/watch?v=vo_fUOk-IKk&list=LLaHOyHOvwkyZZw6dTitN1Vw&index=2&t=443s)* Lecture 2: [click here](https://www.youtube.com/watch?v=4ZZrP68yXCI)* PyTorch basics: https://jovian.ml/aakashns/01-pytorch-basics* Linear Regression: https://jovian.ml/aakashns/02-linear-regression* Logistic Regression: https://jovian.ml/aakashns/03-logistic-regression* Linear regression (minimal): https://jovian.ml/aakashns/housing-linear-minimal* Logistic regression (minimal): https://jovian.ml/aakashns/mnist-logistic-minimal Step 1: Install dependencies
###Code
# Uncomment and run the commands below if imports fail
# !conda install numpy pytorch torchvision cpuonly -c pytorch -y
# !pip install matplotlib --upgrade --quiet
!pip install jovian --upgrade --quiet
###Output
[?25l
[K |████ | 10kB 22.4MB/s eta 0:00:01
[K |███████▉ | 20kB 1.8MB/s eta 0:00:01
[K |███████████▉ | 30kB 2.4MB/s eta 0:00:01
[K |███████████████▊ | 40kB 2.6MB/s eta 0:00:01
[K |███████████████████▋ | 51kB 2.0MB/s eta 0:00:01
[K |███████████████████████▋ | 61kB 2.3MB/s eta 0:00:01
[K |███████████████████████████▌ | 71kB 2.6MB/s eta 0:00:01
[K |███████████████████████████████▍| 81kB 2.8MB/s eta 0:00:01
[K |████████████████████████████████| 92kB 2.4MB/s
[?25h Building wheel for uuid (setup.py) ... [?25l[?25hdone
###Markdown
Step 2: Import Modules
###Code
import torch
import jovian
import torchvision
import torch.nn as nn
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch.nn.functional as F
from torchvision.datasets.utils import download_url
from torch.utils.data import DataLoader, TensorDataset, random_split
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Step 3: Download and Dataset Analysis To download the dataset the pandas `read_csv()` method is used. This method will download the dataset and turn it into a table.
###Code
''' Load dataset '''
dataset_training = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dataset_testing = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
###Output
_____no_output_____
###Markdown
It is possible to observe the data using pandas methods. The `.head()` method shows the first 5 items in the dataset.
###Code
dataset_training.head()
dataset_testing.head()
###Output
_____no_output_____
###Markdown
The `.describe()` method shows an statistical analysis of the dataset.
###Code
dataset_training.describe()
dataset_testing.describe()
###Output
_____no_output_____
###Markdown
Machine learning is all about data; for this reason, it is very important to know what kind of data is available and how much of it there is. It is therefore important to answer the following questions:***Q: How many rows does the training dataset have?***
###Code
num_rows = len(dataset_training.index)
print('The number of rows on the training dataset is: ', num_rows)
###Output
The number of rows on the training dataset is: 627
###Markdown
***Q: How many columns does the training dataset have?***
###Code
num_cols = len(dataset_training.columns)
print('The number of columns on the training dataset is: ', num_cols)
###Output
The number of columns on the training dataset is: 10
###Markdown
***Q: What are the column titles of the input variables?***
###Code
input_cols = dataset_testing.columns.values[1:]
print('The columns titles of the input variables on the testing dataset are: ')
print(input_cols)
###Output
The columns titles of the input variables on the testing dataset are:
['sex' 'age' 'n_siblings_spouses' 'parch' 'fare' 'class' 'deck'
'embark_town' 'alone']
###Markdown
***Q: What is the column title of the output/target variable in the training dataset?***---
###Code
output_cols = dataset_training.columns.values[:1]
print('The target column title on the training dataset is: ', output_cols)
###Output
The target column title on the training dataset is: ['survived']
###Markdown
***Q: Which of the input columns are categorical (on-numerical variables) in the training dataset?***
###Code
categorical_cols = dataset_training.select_dtypes(include=object).columns.values
print('The non-numerical columns in the training dataset are: ')
print(categorical_cols)
###Output
The non-numerical columns in the training dataset are:
['sex' 'class' 'deck' 'embark_town' 'alone']
###Markdown
***Q: How would you plot the distributions of survivors and age of the passengers?***
###Code
sns.set_style('darkgrid')
plt.title('Distribution of survivors among passenges in training dataset')
sns.distplot(dataset_training.survived, bins=2, kde=False);
sns.set_style('darkgrid')
plt.title('Distribution of age among passenges in training dataset')
sns.distplot(dataset_training.age, bins=20, kde=False);
###Output
_____no_output_____
###Markdown
Step 4: Preparing the Dataset for TrainingBy this point it was observed that the data has been handled in Pandas form. To process the training data, it has to be converted into PyTorch tensors. To do so, it is necessary to convert it to numpy arrays, and the following function will do that for us:*To understand how the categorical data is converted into numbers refer to the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html).*
###Code
def dataframe_to_arrays(dataframe):
# Make a copy of the original dataframe
dataframe1 = dataframe.copy(deep=True)
# Convert non-numeric categorical columns to numbers
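# .cat.codes assigns each category (e.g. 'male'/'female', 'First'/'Second'/'Third') an integer code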
for col in categorical_cols:
dataframe1[col] = dataframe1[col].astype('category').cat.codes
# Extract input & outupts as numpy arrays
inputs_array = dataframe1[input_cols].to_numpy()
targets_array = dataframe1[output_cols].to_numpy()
return inputs_array, targets_array
###Output
_____no_output_____
###Markdown
Now it is just a matter of passing the training and testing datasets through the `dataframe_to_arrays` function.
###Code
inputs_array_training, targets_array_training = dataframe_to_arrays(dataset_training)
print(inputs_array_training)
print(targets_array_training)
inputs_array_testing, targets_array_testing = dataframe_to_arrays(dataset_testing)
print(inputs_array_testing)
print(targets_array_testing)
###Output
[[ 1. 35. 0. ... 6. 2. 1.]
[ 1. 54. 0. ... 4. 2. 1.]
[ 0. 58. 0. ... 2. 2. 1.]
...
[ 0. 39. 0. ... 6. 1. 0.]
[ 1. 27. 0. ... 6. 2. 1.]
[ 1. 26. 0. ... 2. 0. 1.]]
[[0]
[0]
[1]
[1]
[1]
[1]
[0]
[0]
[0]
[1]
[1]
[0]
[1]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[1]
[0]
[1]
[0]
[1]
[0]
[1]
[0]
[0]
[1]
[0]
[1]
[1]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[1]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[1]
[0]
[1]
[0]
[1]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[0]
[1]
[1]
[1]
[0]
[0]
[1]
[1]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[0]
[1]
[1]
[1]
[0]
[1]
[0]
[0]
[0]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[0]
[1]
[0]
[1]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[1]
[0]
[1]
[1]
[0]
[0]
[1]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[1]
[0]
[0]
[1]
[0]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[0]
[1]
[1]
[0]
[1]
[0]
[1]
[1]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
[1]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[1]
[0]
[0]
[0]
[1]]
###Markdown
The next thing to do is to convert the numpy arrays into PyTorch tensors.*NOTE: The tensor data type must be `torch.float32`*
###Code
# Import from numpy arrays to pytorch
inputs_training = torch.from_numpy(inputs_array_training).type(torch.float32)
targets_training = torch.from_numpy(targets_array_training).type(torch.float32)
inputs_training.dtype, targets_training.dtype
inputs_testing = torch.from_numpy(inputs_array_testing).type(torch.float32)
targets_testing = torch.from_numpy(targets_array_testing).type(torch.float32)
inputs_testing.dtype, targets_testing.dtype
###Output
_____no_output_____
###Markdown
Now, create PyTorch datasets for training and validation
###Code
train_dataset = TensorDataset(inputs_training, targets_training)
val_dataset = TensorDataset(inputs_testing, targets_testing)
###Output
_____no_output_____
###Markdown
Finally, create dataloaders for training and validation.To do so, it is necessary to select a batch size for the data loader. This means that the loader will not feed the whole dataset to the model at once, but will instead feed the model small batches of examples.
###Code
# Batch Size for data loader
batch_size = 64
train_loader = DataLoader(train_dataset, batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size*2)
###Output
_____no_output_____
###Markdown
It is possible to verify that the data loader worked well by running the following loop:
###Code
print('Data inside the training data loader: ')
for x_i, y_i in train_loader:
print("inputs", x_i)
print("targets", y_i)
break
print('Data inside the validation data loader: ')
for x_o, y_o in val_loader:
print("inputs", x_o)
print("targets", y_o)
break
###Output
Data inside the validation data loader:
inputs tensor([[ 1., 35., 0., ..., 6., 2., 1.],
[ 1., 54., 0., ..., 4., 2., 1.],
[ 0., 58., 0., ..., 2., 2., 1.],
...,
[ 1., 28., 0., ..., 6., 2., 1.],
[ 0., 50., 0., ..., 6., 2., 1.],
[ 1., 28., 0., ..., 6., 1., 1.]])
targets tensor([[0.],
[0.],
[1.],
[1.],
[1.],
[1.],
[0.],
[0.],
[0.],
[1.],
[1.],
[0.],
[1.],
[0.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[0.],
[0.],
[0.],
[1.],
[1.],
[0.],
[1.],
[0.],
[1.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[1.],
[1.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[0.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[0.],
[1.],
[0.],
[1.],
[1.],
[1.],
[0.],
[0.],
[0.],
[0.],
[1.],
[1.],
[1.],
[0.],
[1.],
[1.],
[1.],
[0.],
[0.],
[1.],
[1.],
[1.],
[1.],
[0.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[1.],
[1.],
[1.],
[0.],
[1.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[0.]])
###Markdown
Step 5: Creating the Linear Regression ModelThis is a fairly straightforward linear regression model. First, it is necessary to define how many columns are in the input and output variables.
###Code
input_size = len(input_cols)
output_size = len(output_cols)
input_cols, output_cols
###Output
_____no_output_____
###Markdown
Second, it is necessary to define a class where the model will be built.
###Code
class TitanicModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, output_size)
def forward(self, xb):
out = self.linear(xb)
return out
def training_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calcuate loss
#loss = F.l1_loss(out, targets)
loss = F.mse_loss(out, targets)
return loss
def validation_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calculate loss
#loss = F.l1_loss(out, targets)
loss = F.mse_loss(out, targets)
return {'val_loss': loss.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean()
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result, num_epochs):
# Print result every 20th epoch
if (epoch+1) % 20 == 0 or epoch == num_epochs-1:
print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))
###Output
_____no_output_____
###Markdown
Then, it is just a matter of building the model and checking its initial weights and biases.
###Code
model = TitanicModel()
list(model.parameters())
###Output
_____no_output_____
###Markdown
Step 6: Training the Model and Fitting the DataTo train the model, it is necessary to define the `evaluate` function, which will perform the validation of the model, and the `fit` function, which will perform the training process.
###Code
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result, epochs)
history.append(result)
return history
###Output
_____no_output_____
###Markdown
The `fit` function records the validation loss and metric from each epoch and returns a history of the training process. This is useful for debugging & visualizing the training process. Configurations like batch size and learning rate need to be selected in advance while training machine learning models, and are called hyperparameters. Selecting the right hyperparameters is critical for training an accurate model within a reasonable amount of time, and is an active area of research and experimentation. Feel free to try different learning rates and see how they affect the training process.
###Code
result = evaluate(model, val_loader) # Use the the evaluate function
print(result)
###Output
{'val_loss': 185.7541961669922}
###Markdown
At this point the model is ready to be trained.It may be necessary to run the training loops many times, for different numbers of epochs and with different learning rates, to get a good result. *NOTE: If the loss becomes too large (or `nan`), it might be necessary to re-initialize the model by running the cell `model = TitanicModel()`.*
###Code
epochs = 100
lr = 1e-4
history1 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 100
lr = 1e-4
history2 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 100
lr = 1e-5
history3 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 100
lr = 1e-5
history4 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 100
lr = 1e-6
history5 = fit(epochs, lr, model, train_loader, val_loader)
###Output
Epoch [20], val_loss: 0.4062
Epoch [40], val_loss: 0.4060
Epoch [60], val_loss: 0.4058
Epoch [80], val_loss: 0.4057
Epoch [100], val_loss: 0.4056
###Markdown
Once the model is trained, it might be necessary to report the final validation loss of the model.
###Code
val_loss = history5[-1]
print('The final validation loss is: ', val_loss)
###Output
The final validation loss is: {'val_loss': 0.405606746673584}
###Markdown
If it is necessary to plot the whole training history, it is just a matter of running the following lines.
###Code
whole_history = [result] + history1 + history2 + history3 + history4 + history5
losses = [r['val_loss'] for r in whole_history]
plt.plot(losses)
plt.grid('on')
plt.title('Loss Value vs Training Epochs')
plt.xlabel('epochs')
plt.ylabel('losses')
###Output
_____no_output_____
###Markdown
Step 7: Making Predictions Using the Trained ModelTo make predictions using the trained model, it is necessary to define the `predict_single` function, which takes as input a row from the validation dataset and predicts whether the passenger survived or not.
###Code
def predict_single(input, target, model):
inputs = input.unsqueeze(0)
predictions = model(input)  # generate the model's prediction for this passenger
prediction = predictions[0].detach()
print("Input:", input)
print("Target:", target)
print("Prediction:", prediction)
if prediction >= 0.5:
print('The passenger survived!')
else:
print('The passenger did not survive...')
###Output
_____no_output_____
###Markdown
To see if the predictions are correct, it is possible to print the first 5 rows of the testing dataset.
###Code
dataset_testing.head()
###Output
_____no_output_____
###Markdown
Now, it is just a matter of selecting which row of the validation dataset will pass through the `predict_single` function.
###Code
input, target = val_dataset[0]
predict_single(input, target, model)
input, target = val_dataset[1]
predict_single(input, target, model)
input, target = val_dataset[2]
predict_single(input, target, model)
###Output
Input: tensor([ 0.0000, 58.0000, 0.0000, 0.0000, 26.5500, 0.0000, 2.0000, 2.0000,
1.0000])
Target: tensor([1.])
Prediction: tensor(0.5233)
The passenger survived!
###Markdown
Step 8: Saving the ModelTo save the model, it is just a matter of running the following lines. This will save the model in the local folder so it can be loaded in the future, saving time and computational resources.
###Code
torch.save(model.state_dict(), 'titanic_linear_regresion_model.pth')
###Output
_____no_output_____
###Markdown
The `.state_dict` method returns an `OrderedDict` containing all the weights and bias matrices mapped to the right attributes of the model.
###Code
model.state_dict()
###Output
_____no_output_____
###Markdown
To load the model weights, it is just a matter of creating a new object of the class `TitanicModel` and using the `.load_state_dict` method.
###Code
model2 = TitanicModel()
model2.load_state_dict(torch.load('titanic_linear_regresion_model.pth'))
model2.state_dict()
###Output
_____no_output_____ |
nli_01_task_and_data.ipynb | ###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [NLI model landscape](NLI-model-landscape)1. [Set-up](Set-up)1. [Properties of the corpora](Properties-of-the-corpora) 1. [SNLI properties](SNLI-properties) 1. [MultiNLI properties](MultiNLI-properties)1. [Working with SNLI and MultiNLI](Working-with-SNLI-and-MultiNLI) 1. [Readers](Readers) 1. [The NLIExample class](The-NLIExample-class) 1. [Labels](Labels) 1. [Tree representations](Tree-representations)1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://link.springer.com/chapter/10.1007%2F11736790_9), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](nli_wordentail_bakeoff.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on two large, human-labeled, relatively naturalistic entailment corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). They have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. NLI model landscape Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.zip), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
###Output
_____no_output_____
###Markdown
Properties of the corporaFor both SNLI and MultiNLI, MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. SNLI properties * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa * Top scores currently around 89%. MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides * Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label * Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with SNLI and MultiNLI ReadersThe following readers should make it easy to work with these corpora: * `nli.SNLITrainReader`* `nli.SNLIDevReader`* `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The base class is `nli.NLIReader`, which should be easy to use to define additional readers.If you did change `data_home`, `snli_home`, or `multinli_home` above, then you'll need to call these readers with `dirname` as an argument, where `dirname` is your `snli_home` or `multinli_home`, as appropriate.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes allow this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we trade efficiency for precision in the number of cases we return; see the implementation for details.) The NLIExample classAll of the readers have a `read` method that yields `NLIExample` example instances, which have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree`
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
###Markdown
Labels
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Tree representations Both corpora contain __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
import matplotlib
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are a tokenized version of the example:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) exmples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attribute are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v0.1")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we trade precision for efficiency in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
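###Markdown
Since every `NLIExample` exposes `sentence1`, `sentence2`, and `gold_label`, one convenient pattern is to flatten a small sample of the corpus into a `pandas.DataFrame` for quick inspection. This is just a sketch; the variable and column names below are arbitrary choices, not part of the reader interface:
###Code
# Flatten a ~1% sample of SNLI train into a DataFrame of string pairs and labels.
snli_sample_df = pd.DataFrame(
    [{"premise": ex.sentence1,
      "hypothesis": ex.sentence2,
      "label": ex.gold_label}
     for ex in nli.SNLITrainReader(
         SNLI_HOME, samp_percentage=0.01, random_state=42).read()])

snli_sample_df.head()
###Output
_____no_output_____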
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The precise wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series([ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
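###Markdown
As a convenience, the short ANLI label codes can be mapped onto the SNLI/MultiNLI label names. This is a sketch under the assumption that the codes shown by the distribution above are `e`, `n`, and `c`; adjust the mapping if your copy of the data uses different strings:
###Code
# Hypothetical mapping from ANLI's single-character codes to the familiar names.
ANLI_LABEL_MAP = {
    "e": "entailment",
    "n": "neutral",
    "c": "contradiction"}

anli_labels.map(ANLI_LABEL_MAP).value_counts()
###Output
_____no_output_____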
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
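###Markdown
The two `Dev` readers accept the same keyword arguments, so (as a sketch) a matched-dev sample can be drawn in exactly the same way:
###Code
# Same sampling interface as the Train reader, applied to the matched dev set.
nli.MultiNLIMatchedDevReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____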
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
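###Markdown
The `genre` attribute can be used in the same way. For instance, the following sketch tabulates the context sources for round 3, the round described above as drawing on a variety of sources:
###Code
# Distribution of context sources for round 3 of the ANLI training data.
pd.Series(
    [ex.genre for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(3,)).read()]
).value_counts()
###Output
_____no_output_____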
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
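###Markdown
A sketch that collects the same per-round rates into a single DataFrame, which can be handier for side-by-side comparison (note that, like the loop above, it re-reads the corpus once per round):
###Code
# Per-round rates of correct model predictions, assembled into one table.
rates = {}
for r in (1, 2, 3):
    s = pd.Series(
        [ex.label == ex.model_label
         for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
    ).value_counts(normalize=True)
    rates["Round {}".format(r)] = s

pd.DataFrame(rates)
###Output
_____no_output_____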
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v0.1")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we trade precision for efficiency in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
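###Markdown
The mismatched annotations can be explored in the same way. This is a sketch reusing the helper defined above; any `random_state` will do:
###Code
# Load the mismatched annotated subset and view a randomly chosen example.
mismatched_ann = nli.read_annotated_subset(
    mismatched_ann_filename, MULTINLI_HOME)

view_random_example(mismatched_ann, random_state=7)
###Output
_____no_output_____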
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The precise wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series([ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
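###Markdown
The `reason` attribute records a crowdworker's explanation of why the model went wrong. A quick way to get a feel for these texts (a sketch; the slice size is arbitrary) is to pull a few from round 1 examples where `label` and `model_label` disagree:
###Code
# Free-text explanations for round 1 examples the round-1 model got wrong.
round1_reasons = [
    ex.reason
    for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(1,)).read()
    if ex.label != ex.model_label]

round1_reasons[:3]
###Output
_____no_output_____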
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
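###Markdown
To make the attribute list above concrete, here is a minimal sketch that collects a small sample into a DataFrame using the documented attributes `sentence1`, `sentence2`, and `gold_label` (the sample rate and column names are just illustrative choices):
###Code
# Sketch: a small SNLI sample viewed as a DataFrame.
snli_sample_df = pd.DataFrame(
    [(ex.sentence1, ex.sentence2, ex.gold_label)
     for ex in nli.SNLITrainReader(
         SNLI_HOME, samp_percentage=0.001, random_state=42).read()],
    columns=['premise', 'hypothesis', 'gold_label'])
snli_sample_df.head()
###Output
_____no_output_____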
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
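###Markdown
The SNLI properties above cite mean lengths of about 14.1 tokens for premises and 8.3 for hypotheses. As a rough check, here is a sketch that recomputes those means on a small sample using the parse leaves (the 1% sample rate is an arbitrary choice):
###Code
# Sketch: approximate mean token lengths on a ~1% sample of the train set.
length_sample = list(nli.SNLITrainReader(
    SNLI_HOME, samp_percentage=0.01, random_state=42).read())
premise_lengths = pd.Series(
    [len(ex.sentence1_parse.leaves()) for ex in length_sample])
hypothesis_lengths = pd.Series(
    [len(ex.sentence2_parse.leaves()) for ex in length_sample])
print("Mean premise length: {:.1f}".format(premise_lengths.mean()))
print("Mean hypothesis length: {:.1f}".format(hypothesis_lengths.mean()))
###Output
_____no_output_____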
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
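###Markdown
Building on `view_random_example`, here is a sketch that tallies the annotation tags across the whole matched annotated subset, assuming (as in the function above) that each value of `matched_ann` carries an `'annotations'` list:
###Code
# Sketch: distribution of annotation tags in the matched annotated subset.
from collections import Counter

tag_counts = Counter(
    tag for ann in matched_ann.values() for tag in ann['annotations'])
pd.Series(tag_counts).sort_values(ascending=False)
###Output
_____no_output_____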
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the difference of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
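###Markdown
Since the reader accepts a `rounds` argument, it is also straightforward to inspect the label distribution round by round; a minimal sketch:
###Code
# Sketch: ANLI train label distribution per round.
for r in (1, 2, 3):
    round_labels = pd.Series(
        [ex.label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()])
    print("Round {}".format(r))
    print(round_labels.value_counts(), end="\n\n")
###Output
_____no_output_____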
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2022"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
from datasets import load_dataset
DATA_HOME = os.path.join("data", "nlidata")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI
###Code
snli = load_dataset("snli")
###Output
Reusing dataset snli (/Users/cgpotts/.cache/huggingface/datasets/snli/plain_text/1.0.0/1f60b67533b65ae0275561ff7828aad5ee4282d0e6f844fd148d05d3c6ea251b)
###Markdown
The dataset has three splits:
###Code
snli.keys()
###Output
_____no_output_____
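###Markdown
Each split is a standard `datasets.Dataset`, so the usual API applies. The following sketch prints the split sizes and one raw record; note that the raw records store the label as an integer class index, whereas the `nli.NLIReader` interface below exposes string labels:
###Code
# Sketch: split sizes and one raw Hugging Face record.
for split_name in snli:
    print(split_name, len(snli[split_name]))
snli['train'][0]
###Output
_____no_output_____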
###Markdown
The class `nli.NLIReader` is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. This is supported with the keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.NLIReader(snli['train'], samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __label__: `str`* __premise__: `str`* __hypothesis__: `str`Note: the original SNLI distribution includes a number of other valuable fields, including identifiers for the original caption in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), parses for the examples, and annotation distributions for the validation set. Perhaps someone could update [the dataset on Hugging Face](https://huggingface.co/datasets/snli) to provide access to this information! The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.label for ex in nli.NLIReader(
snli['train'], filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.NLIReader(snli['train']).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 'neutral'})
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI
###Code
mnli = load_dataset("multi_nli")
###Output
Using custom data configuration default
Reusing dataset multi_nli (/Users/cgpotts/.cache/huggingface/datasets/multi_nli/default/0.0.0/591f72eb6263d1ab527561777936b199b714cda156d35716881158a2bd144f39)
###Markdown
For MultiNLI, we have the following splits: * `train`* `validation_matched`* `validation_mismatched`
###Code
mnli.keys()
###Output
_____no_output_____
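###Markdown
As with SNLI, the standard `datasets` API makes it easy to check the split sizes; a minimal sketch:
###Code
# Sketch: MultiNLI split sizes.
for split_name in mnli:
    print(split_name, len(mnli[split_name]))
###Output
_____no_output_____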
###Markdown
The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.NLIReader(mnli['train'], samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have nearly all the attributes that SNLI is supposed to have!* __promptID__: `str`* __label__: `str`* __pairID__: `str`* __premise__: `str`* __premise_binary_parse__: `nltk.tree.Tree`* __premise_parse__: `nltk.tree.Tree`* __hypothesis__: `str`* __hypothesis_binary_parse__: `nltk.tree.Tree`* __hypothesis_parse__: `nltk.tree.Tree`The only field that is unfortunately missing is __annotator_labels__, which gives all five labels chosen by annotators for the two dev splits. Perhaps someone could [create a PR to bring these fields back in](https://huggingface.co/datasets/multi_nli)! The full label distribution for the train split:
###Code
multinli_labels = pd.Series(
[ex.label for ex in nli.NLIReader(
        mnli['train'], filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
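###Markdown
The properties list above notes that the train premises are drawn from five genres. Assuming the underlying Hugging Face records expose a `genre` column (it is not part of the `NLIExample` attribute list above), a one-line sketch recovers that breakdown:
###Code
# Sketch: genre distribution for the MultiNLI train split (assumes a 'genre' column).
pd.Series(mnli['train']['genre']).value_counts()
###Output
_____no_output_____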
###Markdown
No examples in the MultiNLI train set lack a gold label. The original corpus distribution does contain some unlabeled examples in its dev-sets, but those seem to have been removed in the Hugging Face distribution. As a result, the value of the `filter_unlabeled` parameter has no effect for `mnli`. Let's look at a specific example:
###Code
mnli_iterator = iter(nli.NLIReader(mnli['train']).read())
mnli_ex = next(mnli_iterator)
###Output
_____no_output_____
###Markdown
As you can see, there are three versions of the premise and hypothesis sentences:1. Regular string representations of the data2. Unlabeled binary parses3. Labeled parses
###Code
mnli_ex.premise
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
mnli_ex.premise_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
mnli_ex.premise_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
mnli_ex.premise_parse.leaves()
###Output
_____no_output_____
###Markdown
Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.premise)
print(ex.label)
print(ex.hypothesis)
matched_ann = nli.read_annotated_subset(
matched_ann_filename,
mnli['validation_matched'])
view_random_example(matched_ann, random_state=23)
###Output
pairID: 132936n
['#NEGATION', '#COREF']
This one-at-a-time, uncoordinated series of regulatory requirements for the power industry is not the optimal approach for the environment, the power generation sector, or American consumers.
entailment
It is not the optimal approach.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI
###Code
anli = load_dataset("anli")
###Output
Reusing dataset anli (/Users/cgpotts/.cache/huggingface/datasets/anli/plain_text/0.1.0/aabce88453b06dff21c201855ea83283bab0390bff746deadb30b65695755c0b)
###Markdown
For ANLI, we have a lot of options, because it is distributed in three rounds and the rounds can be used independently or pooled:
###Code
anli.keys()
###Output
_____no_output_____
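###Markdown
A minimal sketch listing the per-round splits and their sizes:
###Code
# Sketch: ANLI split names and sizes.
for split_name in anli:
    print(split_name, len(anli[split_name]))
###Output
_____no_output_____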
###Markdown
Here is the fully pooled train setting:
###Code
anli_pooled_reader = nli.NLIReader(
anli['train_r1'], anli['train_r2'], anli['train_r3'],
filter_unlabeled=False)
anli_pooled_labels = pd.Series([ex.label for ex in anli_pooled_reader.read()])
anli_pooled_labels.value_counts()
for rounds in ((1,), (2,), (3,), (1,2,3)):
splits = [anli['train_r{}'.format(i)] for i in rounds]
count = len(list(nli.NLIReader(*splits).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __premise__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pairThe ANLI distribution contains additional fields that are unfortunately left out of the Hugging Face distribution:* __model_label__: the label predicted by the model used in the current round* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAs with the other datasets, it would be a wonderful service to the field to [improve the interface](https://huggingface.co/datasets/anli)!
###Code
anli_ex = next(iter(nli.NLIReader(anli['dev_r3']).read()))
anli_ex
###Output
_____no_output_____
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v0.1")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The precise wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the difference of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series([ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
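###Markdown
Here is a small sketch that maps those codes back to the familiar label names. The single-letter codes `'e'`/`'n'`/`'c'` are an assumption about the encoding used in this ANLI distribution; adjust the mapping if your copy differs.
###Code
# Sketch: map ANLI's label codes to SNLI/MultiNLI-style names (assumed encoding).
label_map = {'e': 'entailment', 'n': 'neutral', 'c': 'contradiction'}
anli_labels.map(label_map).value_counts()
###Output
_____no_output_____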
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give a tokenized version of the corresponding sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
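###Markdown
Both parses are over the same tokenization, so their leaf sequences should agree; here is a quick sanity check along those lines:
###Code
snli_ex.sentence1_parse.leaves() == snli_ex.sentence1_binary_parse.leaves()
###Output
_____no_output_____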
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
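###Markdown
The matched and mismatched dev readers share the same interface. As a small sketch, the following materializes both and compares their sizes (this reads the full dev files, so it takes a moment):
###Code
matched_dev_size = len(list(
    nli.MultiNLIMatchedDevReader(MULTINLI_HOME).read()))
mismatched_dev_size = len(list(
    nli.MultiNLIMismatchedDevReader(MULTINLI_HOME).read()))
matched_dev_size, mismatched_dev_size
###Output
_____no_output_____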
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
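###Markdown
The annotation tags can also be aggregated to see which phenomena are most frequent in the matched subset. The following is a minimal sketch over the `annotations` lists loaded above (the name `matched_tag_counts` is just illustrative):
###Code
from collections import Counter

# Count how often each annotation tag appears across the matched subset.
matched_tag_counts = Counter(
    tag
    for ann in matched_ann.values()
    for tag in ann['annotations'])
matched_tag_counts.most_common(10)
###Output
_____no_output_____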
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
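###Markdown
If you want to pool ANLI with SNLI/MultiNLI, the short ANLI labels can be mapped onto the longer label names used by those corpora. The mapping below is an assumption based on the single-character encoding used in the ANLI distribution ('e', 'n', 'c'); adjust it if your copy of the data differs:
###Code
# Assumed mapping from ANLI's single-character labels to SNLI/MultiNLI-style names:
ANLI_LABEL_MAP = {"e": "entailment", "n": "neutral", "c": "contradiction"}

anli_labels.map(ANLI_LABEL_MAP).value_counts()
###Output
_____no_output_____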
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
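###Markdown
Finally, the `Dev` reader accepts the same `rounds` argument, so the per-round dev counts can be obtained in exactly the same way; a small sketch:
###Code
for rounds in ((1,), (2,), (3,)):
    count = len(list(nli.ANLIDevReader(ANLI_HOME, rounds=rounds).read()))
    print("R{0:}: {1:,}".format(rounds, count))
###Output
_____no_output_____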
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
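###Markdown
Before reading anything, it can be useful to confirm that the expected data directories are in place; a quick sketch (adjust `DATA_HOME` above if you unpacked the data somewhere else):
###Code
# Check that each corpus directory exists under DATA_HOME.
for dirname in (SNLI_HOME, MULTINLI_HOME, ANNOTATIONS_HOME, ANLI_HOME):
    print(os.path.isdir(dirname), dirname)
###Output
_____no_output_____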
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
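###Markdown
The same distribution viewed as proportions, which makes the class balance easier to judge:
###Code
snli_labels.value_counts(normalize=True)
###Output
_____no_output_____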
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give a tokenized version of the corresponding sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
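###Markdown
Token counts then come directly from the leaves; for instance, the premise and hypothesis lengths for this example:
###Code
len(snli_ex.sentence1_parse.leaves()), len(snli_ex.sentence2_parse.leaves())
###Output
_____no_output_____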
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
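###Markdown
Not every example in the annotated subset carries a tag (the example above has an empty annotation list). Here is a quick sketch of how sparse the tags are:
###Code
total_annotated = len(matched_ann)
with_tags = sum(1 for ann in matched_ann.values() if ann['annotations'])
total_annotated, with_tags
###Output
_____no_output_____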
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
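###Markdown
As a final illustration, the `genre` attribute records the source of each context, so the round 3 breakdown by source is easy to compute; a small sketch:
###Code
pd.Series(
    [ex.genre for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(3,)).read()]
).value_counts()
###Output
_____no_output_____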
###Markdown
Natural language inference: Task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [NLI model landscape](NLI-model-landscape)1. [Set-up](Set-up)1. [Properties of the corpora](Properties-of-the-corpora) 1. [SNLI properties](SNLI-properties) 1. [MultiNLI properties](MultiNLI-properties)1. [Working with SNLI and MultiNLI](Working-with-SNLI-and-MultiNLI) 1. [Readers](Readers) 1. [The NLIExample class](The-NLIExample-class) 1. [Labels](Labels) 1. [Tree representations](Tree-representations)1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://link.springer.com/chapter/10.1007%2F11736790_9), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](nli_wordentail_bakeoff.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on two large, human-labeled, relatively naturalistic entailment corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). They have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. NLI model landscape Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.zip), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
###Output
_____no_output_____
###Markdown
Properties of the corporaFor both SNLI and MultiNLI, MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. SNLI properties * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa * Top scores currently around 89%. MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides * Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label * Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with SNLI and MultiNLI ReadersThe following readers should make it easy to work with these corpora: * `nli.SNLITrainReader`* `nli.SNLIDevReader`* `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The base class is `nli.NLIReader`, which should be easy to use to define additional readers.If you did change `data_home`, `snli_home`, or `multinli_home` above, then you'll need to call these readers with `dirname` as an argument, where `dirname` is your `snli_home` or `multinli_home`, as appropriate.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes allow this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) The NLIExample classAll of the readers have a `read` method that yields `NLIExample` example instances, which have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree`
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
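###Markdown
Individual fields of the example can be accessed as attributes; for instance, the premise, gold label, and hypothesis:
###Code
snli_ex.sentence1, snli_ex.gold_label, snli_ex.sentence2
###Output
_____no_output_____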
###Markdown
Labels
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
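###Markdown
Since both corpora use the same label set, the two distributions computed above can be placed side by side; a small sketch:
###Code
pd.DataFrame({
    "SNLI": snli_labels.value_counts(),
    "MultiNLI": multinli_labels.value_counts()})
###Output
_____no_output_____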
###Markdown
Tree representations Both corpora contain __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are a tokenized version of the example:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations):
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 2367n
['#LONG_SENTENCE']
On the window above the sink a small container is stuffed with bits of leftovers--the red berries of barberry, small twigs of willow, cuttings of hinoki cypress with its fruits attached, and the pendulous leathery seed pods of wisteria.
neutral
There is a small jar on the window.
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
# The label-distribution computation from the original notebook, kept for reference:
# snli_labels = pd.Series(
#     [ex.gold_label for ex in nli.SNLITrainReader(
#         SNLI_HOME, filter_unlabeled=False).read()])
# snli_labels.value_counts()

# Instead, print the first dozen training examples to inspect their raw structure:
for ctr, ex in enumerate(nli.SNLITrainReader(SNLI_HOME, filter_unlabeled=False).read()):
    print(ex)
    if ctr >= 11:  # stop after twelve examples
        break
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['contradiction'], 'captionID': '3416050480.jpg#4', 'gold_label': 'contradiction', 'pairID': '3416050480.jpg#4r1c', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is at a diner, ordering an omelette.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', [Tree('X', [Tree('X', ['is', Tree('X', ['at', Tree('X', ['a', 'diner'])])]), ',']), Tree('X', ['ordering', Tree('X', ['an', 'omelette'])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['diner'])])]), Tree(',', [',']), Tree('S', [Tree('VP', [Tree('VBG', ['ordering']), Tree('NP', [Tree('DT', ['an']), Tree('NN', ['omelette'])])])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['entailment'], 'captionID': '3416050480.jpg#4', 'gold_label': 'entailment', 'pairID': '3416050480.jpg#4r1e', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is outdoors, on a horse.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', [Tree('X', [Tree('X', ['is', 'outdoors']), ',']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('ADVP', [Tree('RB', ['outdoors'])]), Tree(',', [',']), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '2267923837.jpg#2', 'gold_label': 'neutral', 'pairID': '2267923837.jpg#2r1n', 'sentence1': 'Children smiling and waving at camera', 'sentence1_binary_parse': Tree('X', ['Children', Tree('X', [Tree('X', [Tree('X', ['smiling', 'and']), 'waving']), Tree('X', ['at', 'camera'])])]), 'sentence1_parse': Tree('ROOT', [Tree('NP', [Tree('S', [Tree('NP', [Tree('NNP', ['Children'])]), Tree('VP', [Tree('VBG', ['smiling']), Tree('CC', ['and']), Tree('VBG', ['waving']), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NN', ['camera'])])])])])])]), 'sentence2': 'They are smiling at their parents', 'sentence2_binary_parse': Tree('X', ['They', Tree('X', ['are', Tree('X', ['smiling', Tree('X', ['at', Tree('X', ['their', 'parents'])])])])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('PRP', ['They'])]), Tree('VP', [Tree('VBP', ['are']), Tree('VP', [Tree('VBG', ['smiling']), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('PRP$', ['their']), Tree('NNS', ['parents'])])])])])])])})
"NLIExample({'annotator_labels': ['entailment'], 'captionID': '2267923837.jpg#2', 'gold_label': 'entailment', 'pairID': '2267923837.jpg#2r1e', 'sentence1': 'Children smiling and waving at camera', 'sentence1_binary_parse': Tree('X', ['Children', Tree('X', [Tree('X', [Tree('X', ['smiling', 'and']), 'waving']), Tree('X', ['at', 'camera'])])]), 'sentence1_parse': Tree('ROOT', [Tree('NP', [Tree('S', [Tree('NP', [Tree('NNP', ['Children'])]), Tree('VP', [Tree('VBG', ['smiling']), Tree('CC', ['and']), Tree('VBG', ['waving']), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NN', ['camera'])])])])])])]), 'sentence2': 'There are children present', 'sentence2_binary_parse': Tree('X', ['There', Tree('X', [Tree('X', ['are', 'children']), 'present'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('EX', ['There'])]), Tree('VP', [Tree('VBP', ['are']), Tree('NP', [Tree('NNS', ['children'])]), Tree('ADVP', [Tree('RB', ['present'])])])])])})
"NLIExample({'annotator_labels': ['contradiction'], 'captionID': '2267923837.jpg#2', 'gold_label': 'contradiction', 'pairID': '2267923837.jpg#2r1c', 'sentence1': 'Children smiling and waving at camera', 'sentence1_binary_parse': Tree('X', ['Children', Tree('X', [Tree('X', [Tree('X', ['smiling', 'and']), 'waving']), Tree('X', ['at', 'camera'])])]), 'sentence1_parse': Tree('ROOT', [Tree('NP', [Tree('S', [Tree('NP', [Tree('NNP', ['Children'])]), Tree('VP', [Tree('VBG', ['smiling']), Tree('CC', ['and']), Tree('VBG', ['waving']), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NN', ['camera'])])])])])])]), 'sentence2': 'The kids are frowning', 'sentence2_binary_parse': Tree('X', [Tree('X', ['The', 'kids']), Tree('X', ['are', 'frowning'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['The']), Tree('NNS', ['kids'])]), Tree('VP', [Tree('VBP', ['are']), Tree('VP', [Tree('VBG', ['frowning'])])])])])})
"NLIExample({'annotator_labels': ['contradiction'], 'captionID': '3691670743.jpg#0', 'gold_label': 'contradiction', 'pairID': '3691670743.jpg#0r1c', 'sentence1': 'A boy is jumping on skateboard in the middle of a red bridge.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['A', 'boy']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['jumping', Tree('X', ['on', 'skateboard'])]), Tree('X', ['in', Tree('X', [Tree('X', ['the', 'middle']), Tree('X', ['of', Tree('X', ['a', Tree('X', ['red', 'bridge'])])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['jumping']), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('NN', ['skateboard'])])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('NP', [Tree('DT', ['the']), Tree('NN', ['middle'])]), Tree('PP', [Tree('IN', ['of']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['red']), Tree('NN', ['bridge'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'The boy skates down the sidewalk.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['The', 'boy']), Tree('X', [Tree('X', [Tree('X', ['skates', 'down']), Tree('X', ['the', 'sidewalk'])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['The']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['skates']), Tree('PRT', [Tree('RP', ['down'])]), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['sidewalk'])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['entailment'], 'captionID': '3691670743.jpg#0', 'gold_label': 'entailment', 'pairID': '3691670743.jpg#0r1e', 'sentence1': 'A boy is jumping on skateboard in the middle of a red bridge.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['A', 'boy']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['jumping', Tree('X', ['on', 'skateboard'])]), Tree('X', ['in', Tree('X', [Tree('X', ['the', 'middle']), Tree('X', ['of', Tree('X', ['a', Tree('X', ['red', 'bridge'])])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['jumping']), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('NN', ['skateboard'])])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('NP', [Tree('DT', ['the']), Tree('NN', ['middle'])]), Tree('PP', [Tree('IN', ['of']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['red']), Tree('NN', ['bridge'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'The boy does a skateboarding trick.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['The', 'boy']), Tree('X', [Tree('X', ['does', Tree('X', ['a', Tree('X', ['skateboarding', 'trick'])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['The']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['does']), Tree('NP', [Tree('DT', ['a']), Tree('NNP', ['skateboarding']), Tree('NN', ['trick'])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3691670743.jpg#0', 'gold_label': 'neutral', 'pairID': '3691670743.jpg#0r1n', 'sentence1': 'A boy is jumping on skateboard in the middle of a red bridge.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['A', 'boy']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['jumping', Tree('X', ['on', 'skateboard'])]), Tree('X', ['in', Tree('X', [Tree('X', ['the', 'middle']), Tree('X', ['of', Tree('X', ['a', Tree('X', ['red', 'bridge'])])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['jumping']), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('NN', ['skateboard'])])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('NP', [Tree('DT', ['the']), Tree('NN', ['middle'])]), Tree('PP', [Tree('IN', ['of']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['red']), Tree('NN', ['bridge'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'The boy is wearing safety equipment.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['The', 'boy']), Tree('X', [Tree('X', ['is', Tree('X', ['wearing', Tree('X', ['safety', 'equipment'])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['The']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['wearing']), Tree('NP', [Tree('NN', ['safety']), Tree('NN', ['equipment'])])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '4804607632.jpg#0', 'gold_label': 'neutral', 'pairID': '4804607632.jpg#0r1n', 'sentence1': 'An older man sits with his orange juice at a small table in a coffee shop while employees in bright colored shirts smile in the background.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['An', Tree('X', ['older', 'man'])]), Tree('X', [Tree('X', [Tree('X', ['sits', Tree('X', ['with', Tree('X', [Tree('X', ['his', Tree('X', ['orange', 'juice'])]), Tree('X', ['at', Tree('X', [Tree('X', ['a', Tree('X', ['small', 'table'])]), Tree('X', ['in', Tree('X', ['a', Tree('X', ['coffee', 'shop'])])])])])])])]), Tree('X', ['while', Tree('X', [Tree('X', ['employees', Tree('X', ['in', Tree('X', ['bright', Tree('X', ['colored', 'shirts'])])])]), Tree('X', ['smile', Tree('X', ['in', Tree('X', ['the', 'background'])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['An']), Tree('JJR', ['older']), Tree('NN', ['man'])]), Tree('VP', [Tree('VBZ', ['sits']), Tree('PP', [Tree('IN', ['with']), Tree('NP', [Tree('NP', [Tree('PRP$', ['his']), Tree('JJ', ['orange']), Tree('NN', ['juice'])]), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['small']), Tree('NN', ['table'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['coffee']), Tree('NN', ['shop'])])])])])])]), Tree('SBAR', [Tree('IN', ['while']), Tree('S', [Tree('NP', [Tree('NP', [Tree('NNS', ['employees'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('JJ', ['bright']), Tree('JJ', ['colored']), Tree('NNS', ['shirts'])])])]), Tree('VP', [Tree('VBP', ['smile']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['background'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'An older man drinks his juice as he waits for his daughter to get off work.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['An', Tree('X', ['older', 'man'])]), Tree('X', [Tree('X', [Tree('X', ['drinks', Tree('X', ['his', 'juice'])]), Tree('X', ['as', Tree('X', ['he', Tree('X', ['waits', Tree('X', ['for', Tree('X', ['his', Tree('X', ['daughter', Tree('X', ['to', Tree('X', [Tree('X', ['get', 'off']), 'work'])])])])])])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['An']), Tree('JJR', ['older']), Tree('NN', ['man'])]), Tree('VP', [Tree('VBZ', ['drinks']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['juice'])]), Tree('SBAR', [Tree('IN', ['as']), Tree('S', [Tree('NP', [Tree('PRP', ['he'])]), Tree('VP', [Tree('VBZ', ['waits']), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['daughter']), Tree('S', [Tree('VP', [Tree('TO', ['to']), Tree('VP', [Tree('VB', ['get']), Tree('PRT', [Tree('RP', ['off'])]), Tree('NP', [Tree('NN', ['work'])])])])])])])])])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['contradiction'], 'captionID': '4804607632.jpg#0', 'gold_label': 'contradiction', 'pairID': '4804607632.jpg#0r1c', 'sentence1': 'An older man sits with his orange juice at a small table in a coffee shop while employees in bright colored shirts smile in the background.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['An', Tree('X', ['older', 'man'])]), Tree('X', [Tree('X', [Tree('X', ['sits', Tree('X', ['with', Tree('X', [Tree('X', ['his', Tree('X', ['orange', 'juice'])]), Tree('X', ['at', Tree('X', [Tree('X', ['a', Tree('X', ['small', 'table'])]), Tree('X', ['in', Tree('X', ['a', Tree('X', ['coffee', 'shop'])])])])])])])]), Tree('X', ['while', Tree('X', [Tree('X', ['employees', Tree('X', ['in', Tree('X', ['bright', Tree('X', ['colored', 'shirts'])])])]), Tree('X', ['smile', Tree('X', ['in', Tree('X', ['the', 'background'])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['An']), Tree('JJR', ['older']), Tree('NN', ['man'])]), Tree('VP', [Tree('VBZ', ['sits']), Tree('PP', [Tree('IN', ['with']), Tree('NP', [Tree('NP', [Tree('PRP$', ['his']), Tree('JJ', ['orange']), Tree('NN', ['juice'])]), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['small']), Tree('NN', ['table'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['coffee']), Tree('NN', ['shop'])])])])])])]), Tree('SBAR', [Tree('IN', ['while']), Tree('S', [Tree('NP', [Tree('NP', [Tree('NNS', ['employees'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('JJ', ['bright']), Tree('JJ', ['colored']), Tree('NNS', ['shirts'])])])]), Tree('VP', [Tree('VBP', ['smile']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['background'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'A boy flips a burger.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'boy']), Tree('X', [Tree('X', ['flips', Tree('X', ['a', 'burger'])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['boy'])]), Tree('VP', [Tree('VBZ', ['flips']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['burger'])])]), Tree('.', ['.'])])])})
"NLIExample({'annotator_labels': ['entailment', 'neutral', 'entailment', 'neutral', 'neutral'], 'captionID': '4804607632.jpg#0', 'gold_label': 'neutral', 'pairID': '4804607632.jpg#0r1e', 'sentence1': 'An older man sits with his orange juice at a small table in a coffee shop while employees in bright colored shirts smile in the background.', 'sentence1_binary_parse': Tree('X', [Tree('X', ['An', Tree('X', ['older', 'man'])]), Tree('X', [Tree('X', [Tree('X', ['sits', Tree('X', ['with', Tree('X', [Tree('X', ['his', Tree('X', ['orange', 'juice'])]), Tree('X', ['at', Tree('X', [Tree('X', ['a', Tree('X', ['small', 'table'])]), Tree('X', ['in', Tree('X', ['a', Tree('X', ['coffee', 'shop'])])])])])])])]), Tree('X', ['while', Tree('X', [Tree('X', ['employees', Tree('X', ['in', Tree('X', ['bright', Tree('X', ['colored', 'shirts'])])])]), Tree('X', ['smile', Tree('X', ['in', Tree('X', ['the', 'background'])])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['An']), Tree('JJR', ['older']), Tree('NN', ['man'])]), Tree('VP', [Tree('VBZ', ['sits']), Tree('PP', [Tree('IN', ['with']), Tree('NP', [Tree('NP', [Tree('PRP$', ['his']), Tree('JJ', ['orange']), Tree('NN', ['juice'])]), Tree('PP', [Tree('IN', ['at']), Tree('NP', [Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['small']), Tree('NN', ['table'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['coffee']), Tree('NN', ['shop'])])])])])])]), Tree('SBAR', [Tree('IN', ['while']), Tree('S', [Tree('NP', [Tree('NP', [Tree('NNS', ['employees'])]), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('JJ', ['bright']), Tree('JJ', ['colored']), Tree('NNS', ['shirts'])])])]), Tree('VP', [Tree('VBP', ['smile']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['background'])])])])])])]), Tree('.', ['.'])])]), 'sentence2': 'An elderly man sits in a small shop.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['An', Tree('X', ['elderly', 'man'])]), Tree('X', [Tree('X', ['sits', Tree('X', ['in', Tree('X', ['a', Tree('X', ['small', 'shop'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['An']), Tree('JJ', ['elderly']), Tree('NN', ['man'])]), Tree('VP', [Tree('VBZ', ['sits']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['small']), Tree('NN', ['shop'])])])]), Tree('.', ['.'])])])})
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give the tokenized version of the corresponding sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
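###Markdown
Because the labeled parses are ordinary `nltk.tree.Tree` objects, the standard NLTK tree methods are available as well. For example, a quick sketch that reads (token, part-of-speech) pairs off the full parse:
###Code
# (token, POS tag) pairs from the labeled parse; pos() is a standard
# nltk.tree.Tree method.
snli_ex.sentence1_parse.pos()
###Output
_____no_output_____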
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
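###Markdown
The two MultiNLI dev conditions can be compared with the same pattern. A sketch, assuming the dev readers take the corpus directory as their first argument just like the train reader:
###Code
# Gold-label distributions for the matched and mismatched dev sets, side by side.
matched_dev_labels = pd.Series(
    [ex.gold_label for ex in nli.MultiNLIMatchedDevReader(MULTINLI_HOME).read()])
mismatched_dev_labels = pd.Series(
    [ex.gold_label for ex in nli.MultiNLIMismatchedDevReader(MULTINLI_HOME).read()])

pd.DataFrame({
    "matched": matched_dev_labels.value_counts(),
    "mismatched": mismatched_dev_labels.value_counts()})
###Output
_____no_output_____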
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
mismatched_ann = nli.read_annotated_subset(mismatched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann, random_state=30)
view_random_example(mismatched_ann, random_state=10)
###Output
pairID: 141539c
['#MODAL', '#NEGATION', '#LONG_SENTENCE', '#WORD_OVERLAP']
You will learn later that the person who usually poured out Mrs. Inglethorp's medicine was always extremely careful not to shake the bottle, but to leave the sediment at the bottom of it undisturbed.
contradiction
The person pouring Mrs. Inglethorp's medicine was always very careful to shake the bottle.
pairID: 11003c
[]
Um she used to, she used to walk every day.
contradiction
She hardly ever walked.
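###Markdown
The annotation tags can also be tallied to see which phenomena are most heavily represented. A small sketch over the matched subset (this assumes, as in the helper above, that each value of `matched_ann` is a dict with an 'annotations' list):
###Code
from collections import Counter

# Tally annotation tags (e.g., '#NEGATION', '#MODAL') across the matched
# annotated subset.
tag_counts = Counter(
    tag for ann in matched_ann.values() for tag in ann['annotations'])

tag_counts.most_common(10)
###Output
_____no_output_____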
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
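###Markdown
Alongside the raw counts just above, the per-round label distributions can be computed with the same kind of loop. A sketch:
###Code
# Gold-label distribution for each ANLI round of the train set.
for r in (1, 2, 3):
    dist = pd.Series(
        [ex.label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
    ).value_counts()
    dist.name = "Round {}".format(r)
    print(dist, end="\n\n")
###Output
_____no_output_____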
###Markdown
The round counts above correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
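###Markdown
If you plan to pool ANLI with SNLI/MultiNLI, it can be convenient to map its single-character labels onto the label names those corpora use. A sketch; the mapping below is an assumption about the encoding, so check it against the `value_counts` output above:
###Code
# Hypothetical mapping from ANLI's single-character labels to
# SNLI/MultiNLI-style label names.
ANLI_LABEL_MAP = {"e": "entailment", "n": "neutral", "c": "contradiction"}

anli_labels.map(ANLI_LABEL_MAP).value_counts()
###Output
_____no_output_____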
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
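###Markdown
The `reason` attribute is often the most interesting part of these dev examples: it records the crowdworker's own guess about why the model was fooled. A sketch that inspects a single dev example:
###Code
# Inspect one ANLI dev example, including the worker's free-text explanation.
anli_dev_ex = next(iter(nli.ANLIDevReader(ANLI_HOME).read()))

print("Context:   ", anli_dev_ex.context)
print("Hypothesis:", anli_dev_ex.hypothesis)
print("Gold label:", anli_dev_ex.label, "| model label:", anli_dev_ex.model_label)
print("Reason:    ", anli_dev_ex.reason)
###Output
_____no_output_____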
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
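###Markdown
Since Round 3 draws its contexts from a variety of sources, the `genre` attribute is a natural thing to tally there. A sketch over the Round 3 portion of the train set:
###Code
# Distribution of context sources (genres) in Round 3 of the ANLI train set.
pd.Series(
    [ex.genre for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(3,)).read()]
).value_counts()
###Output
_____no_output_____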
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
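###Markdown
If any of the readers below fail, the most common cause is a misplaced data directory, so a quick optional sanity check can save some debugging. A sketch using the paths defined above:
###Code
# Confirm that the expected corpus directories exist under DATA_HOME.
for path in (SNLI_HOME, MULTINLI_HOME, ANNOTATIONS_HOME, ANLI_HOME):
    print("{:<45} exists: {}".format(path, os.path.isdir(path)))
###Output
_____no_output_____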
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
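###Markdown
To see the sampling behavior concretely, you can materialize a couple of samples and compare their sizes; each should land near 10% of the 550,152 train examples without matching it exactly. A sketch:
###Code
# Two independent 10% samples of the SNLI train set; their sizes will differ slightly.
for seed in (1, 2):
    sample_size = len(list(nli.SNLITrainReader(
        SNLI_HOME, samp_percentage=0.10, random_state=seed).read()))
    print("random_state={}: {:,} examples".format(seed, sample_size))
###Output
_____no_output_____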
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
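###Markdown
The `annotator_labels` attribute makes it easy to check statistics like the validation figures quoted above. A sketch that counts the train examples carrying more than one annotator label (i.e., the ones that went through the extra validation step); the result should be in the neighborhood of the 56,951 figure, modulo the examples dropped by the default `filter_unlabeled=True`:
###Code
# Number of SNLI train examples that were validated by additional annotators.
sum(1 for ex in nli.SNLITrainReader(SNLI_HOME).read()
    if len(ex.annotator_labels) > 1)
###Output
_____no_output_____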
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give the tokenized version of the corresponding sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann, random_state=23)
###Output
pairID: 132936n
['#NEGATION', '#COREF']
This one-at-a-time, uncoordinated series of regulatory requirements for the power industry is not the optimal approach for the environment, the power generation sector, or American consumers.
entailment
It is not the optimal approach.
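###Markdown
The mismatched annotations can be explored in exactly the same way. A sketch:
###Code
# Load the mismatched annotated subset and view one randomly chosen example.
mismatched_ann = nli.read_annotated_subset(mismatched_ann_filename, MULTINLI_HOME)
view_random_example(mismatched_ann, random_state=7)
###Output
_____no_output_____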
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The round counts above correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
a = nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
print(a)
# Pull one example from the sample and inspect its main attributes.
# Only the final bare expression in a notebook cell is displayed, so each
# value is printed explicitly here.
nli_ex = next(iter(a.read()))
print(nli_ex)
print(nli_ex.sentence1)
print(nli_ex.sentence1_binary_parse)
print(nli_ex.sentence1_parse)
print(nli_ex.gold_label)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give the tokenized version of the corresponding sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
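###Markdown
The mismatched annotations can be explored with the same helper. A sketch:
###Code
# Load the mismatched annotated subset and view one randomly chosen example.
mismatched_ann = nli.read_annotated_subset(mismatched_ann_filename, MULTINLI_HOME)
view_random_example(mismatched_ann, random_state=7)
###Output
_____no_output_____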
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
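###Markdown
Since the `Dev` reader shares this interface, the same loop reports the dev-set sizes per round (assuming it accepts the `rounds` argument in the same way). A sketch:
###Code
# Per-round sizes of the ANLI dev set.
for rounds in ((1,), (2,), (3,), (1, 2, 3)):
    count = len(list(nli.ANLIDevReader(ANLI_HOME, rounds=rounds).read()))
    print("R{0:}: {1:,}".format(rounds, count))
###Output
_____no_output_____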
###Markdown
The round counts above correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
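###Markdown
As a convenience, the single-character codes can be mapped onto the familiar SNLI/MultiNLI label names. This is a minimal sketch, assuming the labels are encoded as the single characters 'e', 'n', and 'c'; adjust the mapping if your copy of the data uses different codes:
###Code
# Hypothetical mapping from ANLI's single-character label codes to
# SNLI/MultiNLI-style names; the codes 'e'/'n'/'c' are an assumption.
ANLI_TO_SNLI = {'e': 'entailment', 'n': 'neutral', 'c': 'contradiction'}

anli_labels.map(ANLI_TO_SNLI).value_counts()
###Output
_____no_output_____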
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
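###Markdown
For a finer-grained view than these overall rates, the gold labels can be cross-tabulated against the model's predictions. A sketch for Round 1 of the training data, reusing the `label` and `model_label` attributes described above:
###Code
# Cross-tabulate gold labels against the round-1 model's predictions.
r1_examples = list(nli.ANLITrainReader(ANLI_HOME, rounds=(1,)).read())

pd.crosstab(
    pd.Series([ex.label for ex in r1_examples], name="gold"),
    pd.Series([ex.model_label for ex in r1_examples], name="model"))
###Output
_____no_output_____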
###Markdown
Natural language inference: Task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2019"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [NLI model landscape](NLI-model-landscape)1. [Set-up](Set-up)1. [Properties of the corpora](Properties-of-the-corpora) 1. [SNLI properties](SNLI-properties) 1. [MultiNLI properties](MultiNLI-properties)1. [Working with SNLI and MultiNLI](Working-with-SNLI-and-MultiNLI) 1. [Readers](Readers) 1. [The NLIExample class](The-NLIExample-class) 1. [Labels](Labels) 1. [Tree representations](Tree-representations)1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://link.springer.com/chapter/10.1007%2F11736790_9), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](nli_wordentail_bakeoff.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on two large, human-labeled, relatively naturalistic entailment corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). They have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. NLI model landscape Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.zip), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
###Output
_____no_output_____
###Markdown
Properties of the corporaFor both SNLI and MultiNLI, MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. SNLI properties * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa * Top scores currently around 89%. MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides * Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label * Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with SNLI and MultiNLI ReadersThe following readers should make it easy to work with these corpora: * `nli.SNLITrainReader`* `nli.SNLIDevReader`* `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The base class is `nli.NLIReader`, which should be easy to use to define additional readers.If you did change `data_home`, `snli_home`, or `multinli_home` above, then you'll need to call these readers with `dirname` as an argument, where `dirname` is your `snli_home` or `multinli_home`, as appropriate.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes allow this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) The NLIExample classAll of the readers have a `read` method that yields `NLIExample` example instances, which have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree`
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
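###Markdown
If you prefer to work with the data in tabular form, a sample of the reader's output can be poured into a pandas DataFrame. A minimal sketch using the attributes listed above and a small random sample to keep things fast:
###Code
# Build a small DataFrame of (premise, hypothesis, label) triples from a
# 1% sample of SNLI train, using the attribute names listed above.
snli_sample_df = pd.DataFrame(
    [(ex.sentence1, ex.sentence2, ex.gold_label)
     for ex in nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.01).read()],
    columns=["premise", "hypothesis", "label"])

snli_sample_df.head()
###Output
_____no_output_____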
###Markdown
Labels
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
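###Markdown
Since both Series are now in memory, the two label distributions can be placed side by side for comparison; a sketch using proportions rather than raw counts:
###Code
# Compare the SNLI and MultiNLI label distributions as proportions, reusing
# the `snli_labels` and `multinli_labels` Series computed just above.
pd.concat(
    [snli_labels.value_counts(normalize=True).rename("SNLI"),
     multinli_labels.value_counts(normalize=True).rename("MultiNLI")],
    axis=1)
###Output
_____no_output_____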
###Markdown
Tree representations Both corpora contain __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are a tokenized version of the example:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
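###Markdown
Because the leaves give a tokenization for free, they also provide an easy way to recompute statistics like the mean token lengths reported above. A rough sketch over a small sample of the training data:
###Code
# Approximate premise/hypothesis lengths in tokens, computed from the parse
# leaves over a 1% sample of SNLI train.
length_sample = list(nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.01).read())

pd.DataFrame({
    "premise": [len(ex.sentence1_parse.leaves()) for ex in length_sample],
    "hypothesis": [len(ex.sentence2_parse.leaves()) for ex in length_sample]}).mean()
###Output
_____no_output_____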
###Markdown
Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations):
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 2367n
['#LONG_SENTENCE']
On the window above the sink a small container is stuffed with bits of leftovers--the red berries of barberry, small twigs of willow, cuttings of hinoki cypress with its fruits attached, and the pendulous leathery seed pods of wisteria.
neutral
There is a small jar on the window.
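###Markdown
To get a sense of which phenomena are annotated and how often, the tags can be counted across the whole subset. This sketch relies only on the dictionary structure used by `view_random_example` above (each value holds an 'annotations' list of string tags):
###Code
# Count annotation tags across the matched annotated subset.
from collections import Counter

tag_counts = Counter(
    tag for ann in matched_ann.values() for tag in ann['annotations'])

tag_counts.most_common(10)
###Output
_____no_output_____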
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
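###Markdown
The reader object above is lazy, so to see how many examples the sample actually contains you can materialize it. A small sketch; a 10% sample of the 550,152 training examples should come out near 55,000:
###Code
# Realized size of a 10% sample of SNLI train; as noted below, the exact
# count is only approximately 10% of the full training set.
len(list(nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42).read()))
###Output
_____no_output_____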
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree give a tokenized version of the sentence:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
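###Markdown
As a quick sanity check on this distribution, one can count how many training examples lack a gold label (i.e., have the placeholder `-`):
###Code
# Count MultiNLI train examples whose gold label is '-', reusing the
# `multinli_labels` Series computed above; the expected count is 0.
(multinli_labels == '-').sum()
###Output
_____no_output_____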
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
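###Markdown
Beyond viewing random examples, it can be useful to collect all of the annotated examples that carry a particular tag. A minimal sketch, again assuming only the dictionary structure used by `view_random_example`; the tag name is purely illustrative, so substitute any tag you see in the data:
###Code
# Collect the pairIDs of annotated examples carrying a given tag.
def examples_with_tag(annotations, tag):
    return [pairid for pairid, ann in annotations.items()
            if tag in ann['annotations']]

# '#NEGATION' is a hypothetical example tag; replace it with one observed in the data.
len(examples_with_tag(matched_ann, '#NEGATION'))
###Output
_____no_output_____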
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [Set-up](Set-up)1. [SNLI](SNLI) 1. [SNLI properties](SNLI-properties) 1. [Working with SNLI](Working-with-SNLI)1. [MultiNLI](MultiNLI) 1. [MultiNLI properties](MultiNLI-properties) 1. [Working with MultiNLI](Working-with-MultiNLI) 1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Adversarial NLI](Adversarial-NLI) 1. [Adversarial NLI properties](Adversarial-NLI-properties) 1. [Working with Adversarial NLI](Working-with-Adversarial-NLI)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||:--------|:---------------:|:------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](hw_wordentail.ipynb), we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on three NLI corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
ANLI_HOME = os.path.join(DATA_HOME, "anli_v1.0")
###Output
_____no_output_____
###Markdown
SNLI SNLI properties For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://www.aclweb.org/anthology/W17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa* Top scores currently around 90%. Working with SNLI The following readers should make it easy to work with SNLI: * `nli.SNLITrainReader`* `nli.SNLIDevReader`Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.The base class, `nli.NLIReader`, is used by all the readers discussed here.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.) All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The following creates the label distribution for the training data:
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(
SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
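# Note: with the default filter_unlabeled=True, examples whose gold_label is '-'
# would be silently dropped from these counts (see the remark below).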
###Output
_____no_output_____
###Markdown
Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`. Let's look at a specific example in some detail:
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
###Output
"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})
###Markdown
As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are tokenized versions of them:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
MultiNLI MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides* Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label* Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with MultiNLI For MultiNLI, we have the following readers: * `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation)). The interface to these is the same as for the SNLI readers:
###Code
nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree` The full label distribution:
###Code
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(
MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
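# Added sketch: the dev readers mentioned above follow the same pattern, e.g.
# pd.Series([ex.gold_label for ex in nli.MultiNLIMatchedDevReader(
#     MULTINLI_HOME, filter_unlabeled=False).read()]).value_counts()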
###Output
_____no_output_____
###Markdown
No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions. Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann, random_state=23)
###Output
pairID: 132936n
['#NEGATION', '#COREF']
This one-at-a-time, uncoordinated series of regulatory requirements for the power industry is not the optimal approach for the environment, the power generation sector, or American consumers.
entailment
It is not the optimal approach.
###Markdown
Adversarial NLI Adversarial NLI propertiesThe ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).1. The crowdworker submits a hypothesis text.1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:| Round | Model | Training data | Context sources | |:------:|:------------|:---------------------------|:-----------------|| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia || 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia || 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 2 | Various |Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation. Working with Adversarial NLI For ANLI, we have the following readers: * `nli.ANLITrainReader`* `nli.ANLIDevReader`As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development. Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:
###Code
for rounds in ((1,), (2,), (3,), (1,2,3)):
count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))
print("R{0:}: {1:,}".format(rounds, count))
###Output
R(1,): 16,946
R(2,): 45,460
R(3,): 100,459
R(1, 2, 3): 162,865
###Markdown
The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total). ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI * __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI* __model_label__: the label predicted by the model used in the current round* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.* __genre__: the source for the __context__ text* __tag__: information about the round and train/dev/test classificationAll these attributes are `str`-valued except for `emturk`, which is `bool`-valued. The labels in this dataset are conceptually the same as for `SNLI/MultiNLI`, but they are encoded differently:
###Code
anli_labels = pd.Series(
[ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])
anli_labels.value_counts()
###Output
_____no_output_____
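###Markdown
As a quick sanity check on the attribute list above, here is a minimal sketch (using the reader and attribute names just described) that peeks at a single round-1 training example:
###Code
anli_ex = next(iter(nli.ANLITrainReader(ANLI_HOME, rounds=(1,)).read()))
print(anli_ex.uid)
print(anli_ex.genre)
print(anli_ex.context)
print(anli_ex.hypothesis)
print(anli_ex.label, anli_ex.model_label)
###Output
_____no_output_____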
###Markdown
For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:
###Code
pd.Series(
[ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]
).value_counts()
###Output
_____no_output_____
###Markdown
In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:
###Code
for r in (1,2,3):
dist = pd.Series(
[ex.label == ex.model_label
for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]
).value_counts()
dist = dist / dist.sum()
dist.name = "Round {}".format(r)
print(dist, end="\n\n")
###Output
True 0.821197
False 0.178803
Name: Round 1, dtype: float64
True 0.932028
False 0.067972
Name: Round 2, dtype: float64
True 0.915916
False 0.084084
Name: Round 3, dtype: float64
###Markdown
Natural language inference: task and datasets
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Our version of the task](Our-version-of-the-task)1. [Primary resources](Primary-resources)1. [NLI model landscape](NLI-model-landscape)1. [Set-up](Set-up)1. [Properties of the corpora](Properties-of-the-corpora) 1. [SNLI properties](SNLI-properties) 1. [MultiNLI properties](MultiNLI-properties)1. [Working with SNLI and MultiNLI](Working-with-SNLI-and-MultiNLI) 1. [Readers](Readers) 1. [The NLIExample class](The-NLIExample-class) 1. [Labels](Labels) 1. [Tree representations](Tree-representations)1. [Annotated MultiNLI subsets](Annotated-MultiNLI-subsets)1. [Other NLI datasets](Other-NLI-datasets) OverviewNatural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.[Dagan et al. (2006)](https://link.springer.com/chapter/10.1007%2F11736790_9), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications. Our version of the taskOur NLI data will look like this:| Premise | Relation | Hypothesis ||---------|---------------|------------|| turtle | contradiction | linguist || A turtled danced | entails | A turtle moved || Every reptile danced | entails | Every turtle moved || Some turtles walk | contradicts | No turtles move || James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |In the [word-entailment bakeoff](nli_wordentail_bakeoff.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully. Primary resourcesWe're going to focus on two large, human-labeled, relatively naturalistic entailment corpora:* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). They have the same format and were crowdsourced using the same basic methods. 
However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI. NLI model landscape Set-up* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.zip), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)
###Code
import nli
import os
import pandas as pd
import random
DATA_HOME = os.path.join("data", "nlidata")
SNLI_HOME = os.path.join(DATA_HOME, "snli_1.0")
MULTINLI_HOME = os.path.join(DATA_HOME, "multinli_1.0")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
###Output
_____no_output_____
###Markdown
Properties of the corporaFor both SNLI and MultiNLI, MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators. SNLI properties * All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).* 550,152 train examples; 10K dev; 10K test* Mean length in tokens: * Premise: 14.1 * Hypothesis: 8.3* Clause-types * Premise S-rooted: 74% * Hypothesis S-rooted: 88.9%* Vocab size: 37,026* 56,951 examples validated by four additional annotators * 58.3% examples with unanimous gold label * 91.2% of gold labels match the author's label * 0.70 overall Fleiss kappa * Top scores currently around 89%. MultiNLI properties* Train premises drawn from five genres: 1. Fiction: works from 1912–2010 spanning many genres 1. Government: reports, letters, speeches, etc., from government websites 1. The _Slate_ website 1. Telephone: the Switchboard corpus 1. Travel: Berlitz travel guides * Additional genres just for dev and test (the __mismatched__ condition): 1. The 9/11 report 1. Face-to-face: The Charlotte Narrative and Conversation Collection 1. Fundraising letters 1. Non-fiction from Oxford University Press 1. _Verbatim_ articles about linguistics* 392,702 train examples; 20K dev; 20K test* 19,647 examples validated by four additional annotators * 58.2% examples with unanimous gold label * 92.6% of gold labels match the author's label * Test-set labels available as a Kaggle competition. * Top matched scores currently around 0.81. * Top mismatched scores currently around 0.83. Working with SNLI and MultiNLI ReadersThe following readers should make it easy to work with these corpora: * `nli.SNLITrainReader`* `nli.SNLIDevReader`* `nli.MultiNLITrainReader`* `nli.MultiNLIMatchedDevReader`* `nli.MultiNLIMismatchedDevReader`The base class is `nli.NLIReader`, which should be easy to use to define additional readers.If you did change `data_home`, `snli_home`, or `multinli_home` above, then you'll need to call these readers with `dirname` as an argument, where `dirname` is your `snli_home` or `multinli_home`, as appropriate.Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes allow this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:
###Code
nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we trade efficiency for precision in the number of cases we return; see the implementation for details.) The NLIExample classAll of the readers have a `read` method that yields `NLIExample` example instances, which have the following attributes:* __annotator_labels__: `list of str`* __captionID__: `str`* __gold_label__: `str`* __pairID__: `str`* __sentence1__: `str`* __sentence1_binary_parse__: `nltk.tree.Tree`* __sentence1_parse__: `nltk.tree.Tree`* __sentence2__: `str`* __sentence2_binary_parse__: `nltk.tree.Tree`* __sentence2_parse__: `nltk.tree.Tree`
###Code
snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())
snli_ex = next(snli_iterator)
print(snli_ex)
snli_ex
###Output
_____no_output_____
###Markdown
Labels
###Code
snli_labels = pd.Series(
[ex.gold_label for ex in nli.SNLITrainReader(SNLI_HOME, filter_unlabeled=False).read()])
snli_labels.value_counts()
multinli_labels = pd.Series(
[ex.gold_label for ex in nli.MultiNLITrainReader(MULTINLI_HOME, filter_unlabeled=False).read()])
multinli_labels.value_counts()
###Output
_____no_output_____
###Markdown
Tree representations Both corpora contain __three versions__ of the premise and hypothesis sentences:1. Regular string representations of the data1. Unlabeled binary parses 1. Labeled parses
###Code
snli_ex.sentence1
###Output
_____no_output_____
###Markdown
The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:
###Code
snli_ex.sentence1_binary_parse
###Output
_____no_output_____
###Markdown
Here's the full parse tree with syntactic categories:
###Code
snli_ex.sentence1_parse
###Output
_____no_output_____
###Markdown
The leaves of either tree are a tokenized version of the example:
###Code
snli_ex.sentence1_parse.leaves()
###Output
_____no_output_____
###Markdown
Annotated MultiNLI subsetsMultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
###Code
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.sentence1)
print(ex.gold_label)
print(ex.sentence2)
matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)
view_random_example(matched_ann)
###Output
pairID: 63218c
[]
Recently, however, I have settled down and become decidedly less experimental.
contradiction
I am still as experimental as ever, and I am always on the move.
|
GetStarted/08_masking.ipynb | ###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
view_state = pdk.ViewState(longitude=-122.2531, latitude=37.6295, zoom=9)
ee_layers.append(EarthEngineLayer(ee_object=maskedDifference, vis_params=vizParams))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
notebooks/Working_with_Imbalanced_Data.ipynb | ###Markdown
Imbalanced Data PreprocessingStrategies for balancing highly imbalanced datasets:* Oversample - Oversample the minority class to balance the dataset - Can create synthetic data based on the minority class* Undersample - Remove majority class data (not preferred)* Weight Classes - Use class weights to make minority class data more prominent Let's use the red wine dataset to start with to demonstrate a highly imbalanced data set with very few high and low quality wine ratings.
###Code
# Imports used throughout this notebook (pandas, matplotlib, scikit-learn, imbalanced-learn)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.utils import class_weight
from sklearn import metrics
from imblearn.over_sampling import RandomOverSampler, SMOTE
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv', sep=';')
df.quality.value_counts()
###Output
_____no_output_____
###Markdown
Set the features to use for our prediction
###Code
features = df[['volatile acidity', 'citric acid', 'sulphates', 'alcohol']]
#features = df.drop(columns='quality')
###Output
_____no_output_____
###Markdown
Set the target value for our prediction
###Code
target = df['quality']
###Output
_____no_output_____
###Markdown
Split the dataset into a training and test dataset.
###Code
xtrain, xtest, ytrain, ytrue = train_test_split(features, target)
###Output
_____no_output_____
###Markdown
Visualize the imbalanced nature of the training data set outcomes.
###Code
count = ytrain.value_counts()
count.plot.bar()
plt.ylabel('Number of records')
plt.xlabel('Target Class')
plt.show()
###Output
_____no_output_____
###Markdown
Base Model - Imbalanced DataUsing a simple Decision Tree Classifier to demonstrate the changes in prediction quality based on using different techniques to deal with imbalanced data.
###Code
model = DecisionTreeClassifier()
model.fit(xtrain, ytrain)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
###Output
_____no_output_____
###Markdown
OversamplingThe Imbalanced-Learn module, which is built on top of scikit-learn, provides a number of options for oversampling (and undersampling) your training data. The most basic is the `RandomOverSampler()` class, whose `sampling_strategy` parameter accepts several options:* `'auto'` (the default, equivalent to `'not majority'`)* `'minority'`* `'not majority'`* `'not minority'`* `'all'`There is also a host of other methods that create synthetic data (e.g., SMOTE):https://imbalanced-learn.org/stable/over_sampling.html
###Code
ros = RandomOverSampler()
X_resampled, y_resampled = ros.fit_resample(xtrain, ytrain)
value_counts = np.unique(y_resampled, return_counts=True)
for val, count in zip(value_counts[0], value_counts[1]):
print(val, count)
###Output
_____no_output_____
###Markdown
Let's look at the resampled data to confirm that we now have a balanced dataset.
###Code
# class counts after random oversampling
plt.bar(value_counts[0], value_counts[1])
plt.ylabel('Number of records')
plt.xlabel('Target Class')
plt.show()
###Output
_____no_output_____
###Markdown
Now let's try our prediction with the oversampled data
###Code
model = DecisionTreeClassifier()
model.fit(X_resampled, y_resampled)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
###Output
_____no_output_____
###Markdown
So from this, we were able to improve the accuracy, precision, and recall of our model! WeightingDetermining weights is a balance of different factors and is partially affected by the size of the imbalance. Scikit Learn has a function to help compute weights that balance the classes, called `compute_class_weight`, from the `class_weight` portion of `sklearn.utils`.To get the balanced weights use:`class_weight = 'balanced'`and the model automatically assigns class weights inversely proportional to their respective class frequencies.If the classes are too imbalanced, you might find better success by assigning weights to each class using a dictionary.
###Code
classes = np.unique(ytrain)
cw = class_weight.compute_class_weight('balanced', classes=classes, y=ytrain)
weights = dict(zip(classes, cw))
print(weights)
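# Note (added sketch): the same inverse-frequency weights can be requested directly
# from the estimator, e.g. DecisionTreeClassifier(class_weight='balanced'), so
# building the dictionary by hand is optional.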
###Output
_____no_output_____
###Markdown
Now let's use our Decision Tree Model with the class weights calculated above.
###Code
model = DecisionTreeClassifier(class_weight=weights)
model.fit(xtrain, ytrain)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
###Output
_____no_output_____
###Markdown
So this is an improvement over our initial model, but not as large as with the oversampled model in this case. Credit Card Fraud - Logistic Regression
###Code
# load the data set
data = pd.read_csv('http://bergeron.valpo.edu/creditcard.csv')
# normalise the amount column
data['normAmount'] = StandardScaler().fit_transform(np.array(data['Amount']).reshape(-1, 1))
# drop Time and Amount columns as they are not relevant for prediction purpose
data = data.drop(['Time', 'Amount'], axis = 1)
# as you can see there are 492 fraud transactions.
print(data['Class'].value_counts())
plt.figure(figsize=(8, 8))
plt.bar([0, 1], data['Class'].value_counts(), tick_label=['Not Fraud', 'Fraud'])
plt.text(0, 286000, data['Class'].value_counts()[0], ha='center', fontsize=16)
plt.text(1, 10000, data['Class'].value_counts()[1], ha='center', fontsize=16)
plt.show()
X = data.drop(columns=['Class'])
y = data['Class']
# split into 70:30 ratio
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
# describes info about train and test set
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
###Output
_____no_output_____
###Markdown
Base Model - Imbalanced Data
###Code
# logistic regression object
lr = LogisticRegression()
# train the model on train set
lr.fit(X_train, y_train.ravel())
predictions = lr.predict(X_test)
# print classification report
print(metrics.classification_report(y_test, predictions))
###Output
_____no_output_____
###Markdown
So our prediction leaves a lot to be desired as we have a very low recall of the fraud cases.Let's try our hand at creating some synthetic data for resampling the minority class using SMOTE (Synthetic Minority Oversampling Technique)
###Code
sm = SMOTE(sampling_strategy='minority', random_state = 2)
X_train_res, y_train_res = sm.fit_resample(X_train, y_train)
lr1 = LogisticRegression()
lr1.fit(X_train_res, y_train_res)
predictions = lr1.predict(X_test)
# print classification report
print(metrics.classification_report(y_test, predictions))
###Output
_____no_output_____
###Markdown
Our model's recall of fraud cases has improved greatly from our original model and our non-fraud recall has not suffered much at all.We can also use a different threshold for predicting the fraud case. Instead of the standard >0.5 threshold, we could set 0.6 or 0.7 to improve the precision without harming the recall too much.
###Code
predictions = (lr1.predict_proba(X_test)[:,1]>=0.7).astype(int)
# print classification report
print(metrics.classification_report(y_test, predictions))
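# Added sketch: sweep a few cut-offs to see the precision/recall trade-off on the
# fraud class (reuses lr1, X_test and y_test from above; exact numbers will vary).
for thresh in (0.5, 0.6, 0.7, 0.9):
    preds = (lr1.predict_proba(X_test)[:, 1] >= thresh).astype(int)
    print(f"threshold={thresh}: "
          f"precision={metrics.precision_score(y_test, preds):.3f}, "
          f"recall={metrics.recall_score(y_test, preds):.3f}")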
###Output
_____no_output_____ |
Workspaces/Using an Already existing Workspace.ipynb | ###Markdown
Using an already existing Workspace in Azure ML by `Mr. Harshit Dawar!`
###Code
# Importing the workspaces class
from azureml.core import Workspace
# Mentioning all the important stuff, replace the placeholders by the values that are required.
ws = Workspace.get(name = "<Workspace Name>",
resource_group = "<Resource Group Name>",
subscription_id = "<Subscription ID>")
# Writing the config File
ws.write_config(path = "./", file_name = "DP-100.json")
###Output
_____no_output_____ |
docs/_downloads/35ec506a78f5361e3a8f74f5b1cdfc8d/plot__functions.ipynb | ###Markdown
FunctionsSome plots visualize a transformation of the original data set. Use a stat parameter to choose a common transformation to visualize.Each stat creates additional variables to map aesthetics to. These variables use a common ..name.. syntax.Look at the examples below.
###Code
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
p = ggplot(df, aes('cty', 'hwy')) + geom_point()
p1 = p + geom_smooth() + ggtitle('geom="smooth" + default stat')
p2 = p + geom_line(stat='smooth', color='magenta', size=1) + ggtitle('geom="line" + stat="smooth"')
w, h = 400, 300
bunch = GGBunch()
bunch.add_plot(p1, 0, 0, w, h)
bunch.add_plot(p2, w, 0, w, h)
bunch
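# Added note (sketch): stat-computed variables can be mapped with the ..name..
# syntax mentioned above, e.g. a density-scaled histogram:
# ggplot(df, aes('hwy')) + geom_histogram(aes(y='..density..'), bins=20)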
###Output
_____no_output_____ |
sandpit/merge_atlas_nc.ipynb | ###Markdown
Merge the stratification and SSH atlas files into one
###Code
from iwatlas import sshdriver
import xarray as xr
import pandas as pd
import numpy as np
from datetime import datetime
basedir = '/home/suntans/cloudstor/Data/IWAtlas'
# climfile = '{}/NWS_2km_GLORYS_hex_2013_2014_Climatology.nc'.format(basedir)
N2file = '{}/NWS_2km_GLORYS_hex_2013_2014_Stratification_Atlas_v2.1.nc'.format(basedir)
sshfile = '{}/NWS_2km_GLORYS_hex_2013_2014_SSHBC_Harmonics.nc'.format(basedir)
outfile = '/home/suntans/cloudstor/Data/IWAtlas-lite/NWS_2km_GLORYS_hex_2013_2014_InternalWave_Atlas.nc'
ssh = xr.open_dataset(sshfile)
N2 = xr.open_dataset(N2file)
N2 = N2.rename_dims({'Ntide':'Nannual'})
N2 = N2.rename_vars({'omega':'omegaA'})
# Drop a few variables
N2 = N2.drop(labels=['N2_t','N2_err','time'])
N2
new_ds = N2.merge(ssh)
new_ds
new_ds.attrs['Created'] = str(datetime.now())
new_ds.attrs['Description'] = 'Internal wave and density stratification climatology file'
new_ds.attrs['Author'] = 'Matt Rayson (matt.rayson@uwa.edu.au)'
new_ds.attrs['Number_Annual_Harmonics'] = ssh.attrs['Number_Annual_Harmonics']
new_ds
compflags = {'zlib':True, 'complevel':5}
encoding = {}
outvars= ['N2_mu', 'N2_re','N2_im','SSH_BC_var','SSH_BC_aa','SSH_BC_Aa', 'SSH_BC_Ba']
for vv in outvars:
encoding.update({vv:compflags})
encoding
new_ds.to_netcdf(outfile, encoding=encoding)
###Output
_____no_output_____ |
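###Markdown
A quick sanity check on the written file: the sketch below is an assumption-flagged addition, not part of the original workflow. It reopens `outfile` with xarray and confirms that the merged variables and the compression settings round-tripped.
###Code
# Hedged sketch: reopen the merged atlas file and confirm the expected variables are present
check = xr.open_dataset(outfile)
print(list(check.data_vars))
print('zlib:', check['N2_mu'].encoding.get('zlib'),
      'complevel:', check['N2_mu'].encoding.get('complevel'))
check.close()
###Output
_____no_output_____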
mountain_car.ipynb | ###Markdown
https://pythonprogramming.net/q-learning-reinforcement-learning-python-tutorial/ and https://en.wikipedia.org/wiki/Q-learning
###Code
# objective is to get the cart to the flag.
# for now, let's just move randomly:
import gym
import numpy as np
env = gym.make("MountainCar-v0")
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 25000
SHOW_EVERY = 3000
DISCRETE_OS_SIZE = [20, 20]
discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/DISCRETE_OS_SIZE
# Exploration settings
epsilon = 1 # not a constant, qoing to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = EPISODES//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE + [env.action_space.n]))
def get_discrete_state(state):
discrete_state = (state - env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(int))  # we use this tuple to look up the 3 Q values for the available actions in the q-table
for episode in range(EPISODES):
discrete_state = get_discrete_state(env.reset())
done = False
if episode % SHOW_EVERY == 0:
render = True
print(episode)
else:
render = False
while not done:
if np.random.random() > epsilon:
# Get action from Q table
action = np.argmax(q_table[discrete_state])
else:
# Get random action
action = np.random.randint(0, env.action_space.n)
new_state, reward, done, _ = env.step(action)
new_discrete_state = get_discrete_state(new_state)
if episode % SHOW_EVERY == 0:
env.render()
#new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)
# If simulation did not end yet after last step - update Q table
if not done:
# Maximum possible Q value in next step (for new state)
max_future_q = np.max(q_table[new_discrete_state])
# Current Q value (for current state and performed action)
current_q = q_table[discrete_state + (action,)]
# And here's our equation for a new Q value for current state and action
new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)
# Update Q table with new Q value
q_table[discrete_state + (action,)] = new_q
        # Simulation ended (for any reason) - if goal position is achieved - update Q value with reward directly
elif new_state[0] >= env.goal_position:
#q_table[discrete_state + (action,)] = reward
q_table[discrete_state + (action,)] = 0
discrete_state = new_discrete_state
# Decaying is being done every episode if episode number is within decaying range
if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
epsilon -= epsilon_decay_value
env.close()
###Output
0
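###Markdown
To see what the learned table actually encodes, we can roll out the greedy policy (always taking the argmax over the Q values) for a single episode. This is a minimal sketch, not part of the original tutorial; it reuses `q_table` and `get_discrete_state` from above and recreates the environment since it was closed after training.
###Code
# Hedged sketch: evaluate the learned q_table with a purely greedy policy for one episode
env = gym.make("MountainCar-v0")
state = get_discrete_state(env.reset())
done = False
total_reward = 0
while not done:
    action = np.argmax(q_table[state])        # always pick the best known action
    new_state, reward, done, _ = env.step(action)
    state = get_discrete_state(new_state)
    total_reward += reward
env.close()
print("Greedy-policy episode reward:", total_reward)
###Output
_____no_output_____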
###Markdown
The agent finds a way to get to the top of the hill, but it isn't very stable for a while; near the end we see that the agent seems to reach the top of the mountain in most episodes.
###Code
agent2=AgentMountainCar(env=env)
n_episodes=400
batch_size=256
gamma = 0.95
lr = 0.001
decay = 0.99
agent2.init_hyperparameters(n_episodes, batch_size, gamma, lr, decay)
agent2.train()
plot_agent_rewards(agent2)
###Output
_____no_output_____
###Markdown
Here we changed the discount factor, and this run seems a lot less stable than the previous one. Rewards improve progressively until around episode 200, where the agent finds the top of the hill, but it doesn't seem very consistent afterwards. This could be due to the discount factor being too low, which does not allow proper updates of the value function. The number of episodes was also increased to give the agent more time to learn. The wall-clock time is much longer than it should be because the computer spent some time sleeping between episodes.
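The quick check below is an assumption-flagged aside, using the common 1/(1 - gamma) rule of thumb for the effective planning horizon; it makes the discount-factor argument concrete, since MountainCar episodes run up to 200 steps and a horizon much shorter than that heavily discounts the reward for actually reaching the flag.
###Code
# Hedged sketch: effective planning horizon ~ 1 / (1 - gamma) for the two discount factors
for gamma in (0.95, 0.995):
    print(f"gamma={gamma}: effective horizon ~ {1 / (1 - gamma):.0f} steps (episode cap is 200)")
###Output
_____no_output_____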
###Code
agent3=AgentMountainCar(env=env)
n_episodes=400
batch_size=128
gamma = 0.995
lr = 0.0001
decay = 0.99
agent3.init_hyperparameters(n_episodes, batch_size, gamma, lr, decay)
agent3.train()
plot_agent_rewards(agent3)
###Output
_____no_output_____ |
Hyperparameter Tuning, Regularization and Optimization/Week 1/Regularization.ipynb | ###Markdown
RegularizationWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!**You will learn to:** Use regularization in your deep learning models.Let's get started! Table of Contents- [1 - Packages](1)- [2 - Problem Statement](2)- [3 - Loading the Dataset](3)- [4 - Non-Regularized Model](4)- [5 - L2 Regularization](5) - [Exercise 1 - compute_cost_with_regularization](ex-1) - [Exercise 2 - backward_propagation_with_regularization](ex-2)- [6 - Dropout](6) - [6.1 - Forward Propagation with Dropout](6-1) - [Exercise 3 - forward_propagation_with_dropout](ex-3) - [6.2 - Backward Propagation with Dropout](6-2) - [Exercise 4 - backward_propagation_with_dropout](ex-4)- [7 - Conclusions](7) 1 - Packages
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import scipy.io
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
from testCases import *
from public_tests import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
2 - Problem Statement You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. Figure 1: Football field. The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head They give you the following 2D dataset from France's past 10 games. 3 - Loading the Dataset
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.- If the dot is blue, it means the French player managed to hit the ball with his/her head- If the dot is red, it means the other team's player hit the ball with their head**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 4 - Non-Regularized ModelYou will use the following neural network (already implemented for you below). This model can be used:- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than oneYou will first try the model without any regularization. Then, you will implement:- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert (lambd == 0 or keep_prob == 1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724204
Cost after iteration 20000: 0.13851642423234922
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting. 5 - L2 RegularizationThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$To:$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$Let's modify your cost and observe the consequences. Exercise 1 - compute_cost_with_regularizationImplement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :```pythonnp.sum(np.square(Wl))```Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
#(≈ 1 lines of code)
# L2_regularization_cost =
# YOUR CODE STARTS HERE
L2_regularization_cost = (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3))) * (lambd/2) *(1/m)
# YOUR CODE ENDS HERE
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, t_Y, parameters = compute_cost_with_regularization_test_case()
cost = compute_cost_with_regularization(A3, t_Y, parameters, lambd=0.1)
print("cost = " + str(cost))
compute_cost_with_regularization_test(compute_cost_with_regularization)
###Output
cost = 1.7864859451590758
All tests passed.
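###Markdown
As a side note on the design choice, the three explicit np.sum(np.square(...)) terms generalize to any number of layers. The sketch below is not part of the graded exercise; it assumes the usual `parameters` dictionary with weight keys "W1", "W2", ... as returned by the test case above.
###Code
# Hedged sketch: L2 penalty summed over every weight matrix in a parameters dictionary
def l2_penalty(parameters, lambd, m):
    weight_sq_sum = sum(np.sum(np.square(parameters[key]))
                        for key in parameters if key.startswith("W"))
    return (lambd / (2 * m)) * weight_sq_sum
print(l2_penalty(parameters, lambd=0.1, m=t_Y.shape[1]))
###Output
_____no_output_____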
###Markdown
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. Exercise 2 - backward_propagation_with_regularizationImplement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
#(≈ 1 lines of code)
# dW3 = 1./m * np.dot(dZ3, A2.T) + None
# YOUR CODE STARTS HERE
dW3 = (1/m) *(np.dot(dZ3, A2.T)) + (lambd/m)*W3
# YOUR CODE ENDS HERE
db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
#(≈ 1 lines of code)
# dW2 = 1./m * np.dot(dZ2, A1.T) + None
# YOUR CODE STARTS HERE
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
# YOUR CODE ENDS HERE
db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
#(≈ 1 lines of code)
# dW1 = 1./m * np.dot(dZ1, X.T) + None
# YOUR CODE STARTS HERE
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
# YOUR CODE ENDS HERE
db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
t_X, t_Y, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(t_X, t_Y, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
backward_propagation_with_regularization_test(backward_propagation_with_regularization)
###Output
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
All tests passed.
###Markdown
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282238
Cost after iteration 20000: 0.26809163371273004
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**:L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember:** the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost.- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices.- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 6 - DropoutFinally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. Figure 3: Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 6.1 - Forward Propagation with Dropout Exercise 3 - forward_propagation_with_dropoutImplement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**:You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2. 
Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.This python statement: `X = (X < keep_prob).astype(int)` is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :```for i,v in enumerate(x): if v < keep_prob: x[i] = 1 else: v >= keep_prob x[i] = 0```Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
#(≈ 4 lines of code) # Steps 1-4 below correspond to the Steps 1-4 described above.
# D1 = # Step 1: initialize matrix D1 = np.random.rand(..., ...)
# D1 = # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
# A1 = # Step 3: shut down some neurons of A1
# A1 = # Step 4: scale the value of neurons that haven't been shut down
# YOUR CODE STARTS HERE
D1 = np.random.rand(A1.shape[0], A1.shape[1])
D1 = D1 < keep_prob
A1 = A1 * D1
A1 = A1 / keep_prob
# YOUR CODE ENDS HERE
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
#(≈ 4 lines of code)
# D2 = # Step 1: initialize matrix D2 = np.random.rand(..., ...)
# D2 = # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
# A2 = # Step 3: shut down some neurons of A2
# A2 = # Step 4: scale the value of neurons that haven't been shut down
# YOUR CODE STARTS HERE
D2 = np.random.rand(A2.shape[0], A2.shape[1])
D2 = D2 < keep_prob
A2 = A2 * D2
A2 = A2 / keep_prob
# YOUR CODE ENDS HERE
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
t_X, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(t_X, parameters, keep_prob=0.7)
print ("A3 = " + str(A3))
forward_propagation_with_dropout_test(forward_propagation_with_dropout)
###Output
A3 = [[0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
All tests passed.
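###Markdown
A small numerical check of the "inverted dropout" scaling in Step 4: masking and then dividing by keep_prob keeps the expected activation roughly unchanged. This is a hedged sketch on synthetic data, not part of the graded exercise.
###Code
# Hedged sketch: the mean activation is approximately preserved after mask + rescale
np.random.seed(0)
A = np.random.rand(1000, 1000)                     # fake activations
keep_prob = 0.7
D = (np.random.rand(*A.shape) < keep_prob).astype(int)
A_dropped = (A * D) / keep_prob                    # inverted dropout
print("original mean:", A.mean())
print("dropped mean: ", A_dropped.mean())
###Output
_____no_output_____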
###Markdown
6.2 - Backward Propagation with Dropout Exercise 4 - backward_propagation_with_dropoutImplement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)
dA2 = np.dot(W3.T, dZ3)
#(≈ 2 lines of code)
# dA2 = # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
# dA2 = # Step 2: Scale the value of neurons that haven't been shut down
# YOUR CODE STARTS HERE
dA2 = dA2 * D2
dA2 = dA2 / keep_prob
# YOUR CODE ENDS HERE
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.dot(W2.T, dZ2)
#(≈ 2 lines of code)
# dA1 = # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
# dA1 = # Step 2: Scale the value of neurons that haven't been shut down
# YOUR CODE STARTS HERE
dA1 = dA1 * D1
dA1 = dA1 / keep_prob
# YOUR CODE ENDS HERE
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
t_X, t_Y, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(t_X, t_Y, cache, keep_prob=0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
backward_propagation_with_dropout_test(backward_propagation_with_dropout)
###Output
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
All tests passed.
###Markdown
Let's now run the model with dropout (`keep_prob = 0.86`). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
Cost after iteration 10000: 0.0610169865749056
Cost after iteration 20000: 0.060582435798513114
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb) CPT Entity Resolvers with sBert
###Code
import json, os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
import sparknlp
import sparknlp_jsl
import sys, os, time
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.util import *
from sparknlp_jsl.annotator import *
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
###Output
3.3.0
3.3.0
###Markdown
Named Entity Recognition
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
])
data_ner = spark.createDataFrame([['']]).toDF("text")
ner_model = ner_pipeline.fit(data_ner)
ner_light_pipeline = LightPipeline(ner_model)
clinical_note = (
'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years '
'prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior '
'episode of HTG-induced pancreatitis three years prior to presentation, associated '
'with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, '
'presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. '
'Two weeks prior to presentation, she was treated with a five-day course of amoxicillin '
'for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin '
'for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months '
'at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; '
'significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent '
'laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, '
'creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) '
'10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed '
'as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for '
'starvation ketosis, as she reported poor oral intake for three days prior to admission. However, '
'serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap '
'was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and '
'lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - '
'the original sample was centrifuged and the chylomicron layer removed prior to analysis due to '
'interference from turbidity caused by lipemia again. The patient was treated with an insulin drip '
'for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within '
'24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting '
'of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on '
'40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg '
'two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She '
'had close follow-up with endocrinology post discharge.'
)
from sparknlp_display import NerVisualizer
visualiser = NerVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080', 'TEST':'#800080', 'TREATMENT':'#806080'})
# Set label filter
#visualiser.display(ppres, label_col='ner_chunk', labels=['PER'])
visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', document_col='document')
###Output
_____no_output_____
###Markdown
CPT Resolver
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Test','Procedure'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_pipeline_cpt = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
cpt_resolver])
text = '''
EXAM: Left heart cath, selective coronary angiogram, right common femoral angiogram, and StarClose closure of right common femoral artery.
REASON FOR EXAM: Abnormal stress test and episode of shortness of breath.
PROCEDURE: Right common femoral artery, 6-French sheath, JL4, JR4, and pigtail catheters were used.
FINDINGS:
1. Left main is a large-caliber vessel. It is angiographically free of disease,
2. LAD is a large-caliber vessel. It gives rise to two diagonals and septal perforator. It erupts around the apex. LAD shows an area of 60% to 70% stenosis probably in its mid portion. The lesion is a type A finishing before the takeoff of diagonal 1. The rest of the vessel is angiographically free of disease.
3. Diagonal 1 and diagonal 2 are angiographically free of disease.
4. Left circumflex is a small-to-moderate caliber vessel, gives rise to 1 OM. It is angiographically free of disease.
5. OM-1 is angiographically free of disease.
6. RCA is a large, dominant vessel, gives rise to conus, RV marginal, PDA and one PL. RCA has a tortuous course and it has a 30% to 40% stenosis in its proximal portion.
7. LVEDP is measured 40 mmHg.
8. No gradient between LV and aorta is noted.
Due to contrast concern due to renal function, no LV gram was performed.
Following this, right common femoral angiogram was performed followed by StarClose closure of the right common femoral artery.
'''
data_ner = spark.createDataFrame([[text]]).toDF("text")
sbert_models = sbert_pipeline_cpt.fit(data_ner)
sbert_outputs = sbert_models.transform(data_ner)
from pyspark.sql import functions as F
cpt_sdf = sbert_outputs.select(F.explode(F.arrays_zip("ner_chunk.result","ner_chunk.metadata","cpt_code.result","cpt_code.metadata","ner_chunk.begin","ner_chunk.end")).alias("cpt_code")) \
.select(F.expr("cpt_code['0']").alias("chunk"),
F.expr("cpt_code['4']").alias("begin"),
F.expr("cpt_code['5']").alias("end"),
F.expr("cpt_code['1'].entity").alias("entity"),
F.expr("cpt_code['2']").alias("code"),
F.expr("cpt_code['3'].confidence").alias("confidence"),
F.expr("cpt_code['3'].all_k_resolutions").alias("all_k_resolutions"),
F.expr("cpt_code['3'].all_k_results").alias("all_k_codes"))
cpt_sdf.show(10, truncate=100)
import pandas as pd
def get_codes (light_model, code, text):
full_light_result = light_model.fullAnnotate(text)
chunks = []
terms = []
begin = []
end = []
resolutions=[]
entity=[]
all_codes=[]
for chunk, term in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
begin.append(chunk.begin)
end.append(chunk.end)
chunks.append(chunk.result)
terms.append(term.result)
entity.append(chunk.metadata['entity'])
resolutions.append(term.metadata['all_k_resolutions'])
all_codes.append(term.metadata['all_k_results'])
df = pd.DataFrame({'chunks':chunks, 'begin': begin, 'end':end, 'entity':entity,
'code':terms,'resolutions':resolutions,'all_codes':all_codes})
return df
text='''
REASON FOR EXAM: Evaluate for retroperitoneal hematoma on the right side of pelvis, the patient has been following, is currently on Coumadin.
In CT abdomen, there is no evidence for a retroperitoneal hematoma, but there is an metastases on the right kidney.
The liver, spleen, adrenal glands, and pancreas are unremarkable. Within the superior pole of the left kidney, there is a 3.9 cm cystic lesion. A 3.3 cm cystic lesion is also seen within the inferior pole of the left kidney. No calcifications are noted. The kidneys are small bilaterally.
In CT pelvis, evaluation of the bladder is limited due to the presence of a Foley catheter, the bladder is nondistended. The large and small bowels are normal in course and caliber. There is no obstruction.
'''
cpt_light_pipeline = LightPipeline(sbert_models)
get_codes (cpt_light_pipeline, 'cpt_code', text)
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'Procedure':'#008080', 'Test':'#800080'})
light_data_cpt = cpt_light_pipeline.fullAnnotate(text)
vis.display(light_data_cpt[0], 'ner_chunk', 'cpt_code')
text='''1. The left ventricular cavity size and wall thickness appear normal. The wall motion and left ventricular systolic function appears hyperdynamic with estimated ejection fraction of 70% to 75%. There is near-cavity obliteration seen. There also appears to be increased left ventricular outflow tract gradient at the mid cavity level consistent with hyperdynamic left ventricular systolic function. There is abnormal left ventricular relaxation pattern seen as well as elevated left atrial pressures seen by Doppler examination.
2. The left atrium appears mildly dilated.
3. The right atrium and right ventricle appear normal.
4. The aortic root appears normal.
5. The aortic valve appears calcified with mild aortic valve stenosis, calculated aortic valve area is 1.3 cm square with a maximum instantaneous gradient of 34 and a mean gradient of 19 mm.
6. There is mitral annular calcification extending to leaflets and supportive structures with thickening of mitral valve leaflets with mild mitral regurgitation.
7. The tricuspid valve appears normal with trace tricuspid regurgitation with moderate pulmonary artery hypertension. Estimated pulmonary artery systolic pressure is 49 mmHg. Estimated right atrial pressure of 10 mmHg.
8. The pulmonary valve appears normal with trace pulmonary insufficiency.
9. There is no pericardial effusion or intracardiac mass seen.
10. There is a color Doppler suggestive of a patent foramen ovale with lipomatous hypertrophy of the interatrial septum.
11. The study was somewhat technically limited and hence subtle abnormalities could be missed from the study.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
text='''
CC: Left hand numbness on presentation; then developed lethargy later that day.
HX: On the day of presentation, this 72 y/o RHM suddenly developed generalized weakness and lightheadedness, and could not rise from a chair. Four hours later he experienced sudden left hand numbness lasting two hours. There were no other associated symptoms except for the generalized weakness and lightheadedness. He denied vertigo.
He had been experiencing falling spells without associated LOC up to several times a month for the past year.
MEDS: procardia SR, Lasix, Ecotrin, KCL, Digoxin, Colace, Coumadin.
PMH: 1)8/92 evaluation for presyncope (Echocardiogram showed: AV fibrosis/calcification, AV stenosis/insufficiency, MV stenosis with annular calcification and regurgitation, moderate TR, Decreased LV systolic function, severe LAE. MRI brain: focal areas of increased T2 signal in the left cerebellum and in the brainstem probably representing microvascular ischemic disease.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb) CPT Entity Resolvers with sBert
###Code
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
! pip install spark-nlp-display
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
import sparknlp
import sparknlp_jsl
import sys, os, time
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.util import *
from sparknlp_jsl.annotator import *
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
###Output
3.1.2
3.1.2
###Markdown
Named Entity Recognition
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
])
data_ner = spark.createDataFrame([['']]).toDF("text")
ner_model = ner_pipeline.fit(data_ner)
ner_light_pipeline = LightPipeline(ner_model)
clinical_note = (
'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years '
'prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior '
'episode of HTG-induced pancreatitis three years prior to presentation, associated '
'with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, '
'presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. '
'Two weeks prior to presentation, she was treated with a five-day course of amoxicillin '
'for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin '
'for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months '
'at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; '
'significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent '
'laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, '
'creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) '
'10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed '
'as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for '
'starvation ketosis, as she reported poor oral intake for three days prior to admission. However, '
'serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap '
'was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and '
'lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - '
'the original sample was centrifuged and the chylomicron layer removed prior to analysis due to '
'interference from turbidity caused by lipemia again. The patient was treated with an insulin drip '
'for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within '
'24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting '
'of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on '
'40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg '
'two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She '
'had close follow-up with endocrinology post discharge.'
)
from sparknlp_display import NerVisualizer
visualiser = NerVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080', 'TEST':'#800080', 'TREATMENT':'#806080'})
# Set label filter
#visualiser.display(ppres, label_col='ner_chunk', labels=['PER'])
visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', document_col='document')
###Output
_____no_output_____
###Markdown
CPT Resolver
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Test','Procedure'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_pipeline_cpt = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
cpt_resolver])
text = '''
EXAM: Left heart cath, selective coronary angiogram, right common femoral angiogram, and StarClose closure of right common femoral artery.
REASON FOR EXAM: Abnormal stress test and episode of shortness of breath.
PROCEDURE: Right common femoral artery, 6-French sheath, JL4, JR4, and pigtail catheters were used.
FINDINGS:
1. Left main is a large-caliber vessel. It is angiographically free of disease,
2. LAD is a large-caliber vessel. It gives rise to two diagonals and septal perforator. It erupts around the apex. LAD shows an area of 60% to 70% stenosis probably in its mid portion. The lesion is a type A finishing before the takeoff of diagonal 1. The rest of the vessel is angiographically free of disease.
3. Diagonal 1 and diagonal 2 are angiographically free of disease.
4. Left circumflex is a small-to-moderate caliber vessel, gives rise to 1 OM. It is angiographically free of disease.
5. OM-1 is angiographically free of disease.
6. RCA is a large, dominant vessel, gives rise to conus, RV marginal, PDA and one PL. RCA has a tortuous course and it has a 30% to 40% stenosis in its proximal portion.
7. LVEDP is measured 40 mmHg.
8. No gradient between LV and aorta is noted.
Due to contrast concern due to renal function, no LV gram was performed.
Following this, right common femoral angiogram was performed followed by StarClose closure of the right common femoral artery.
'''
data_ner = spark.createDataFrame([[text]]).toDF("text")
sbert_models = sbert_pipeline_cpt.fit(data_ner)
sbert_outputs = sbert_models.transform(data_ner)
from pyspark.sql import functions as F
cpt_sdf = sbert_outputs.select(F.explode(F.arrays_zip("ner_chunk.result","ner_chunk.metadata","cpt_code.result","cpt_code.metadata","ner_chunk.begin","ner_chunk.end")).alias("cpt_code")) \
.select(F.expr("cpt_code['0']").alias("chunk"),
F.expr("cpt_code['4']").alias("begin"),
F.expr("cpt_code['5']").alias("end"),
F.expr("cpt_code['1'].entity").alias("entity"),
F.expr("cpt_code['2']").alias("code"),
F.expr("cpt_code['3'].confidence").alias("confidence"),
F.expr("cpt_code['3'].all_k_resolutions").alias("all_k_resolutions"),
F.expr("cpt_code['3'].all_k_results").alias("all_k_codes"))
cpt_sdf.show(10, truncate=100)
import pandas as pd
def get_codes (light_model, code, text):
full_light_result = light_model.fullAnnotate(text)
chunks = []
terms = []
begin = []
end = []
resolutions=[]
entity=[]
all_codes=[]
for chunk, term in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
begin.append(chunk.begin)
end.append(chunk.end)
chunks.append(chunk.result)
terms.append(term.result)
entity.append(chunk.metadata['entity'])
resolutions.append(term.metadata['all_k_resolutions'])
all_codes.append(term.metadata['all_k_results'])
df = pd.DataFrame({'chunks':chunks, 'begin': begin, 'end':end, 'entity':entity,
'code':terms,'resolutions':resolutions,'all_codes':all_codes})
return df
text='''
REASON FOR EXAM: Evaluate for retroperitoneal hematoma on the right side of pelvis, the patient has been following, is currently on Coumadin.
In CT abdomen, there is no evidence for a retroperitoneal hematoma, but there is an metastases on the right kidney.
The liver, spleen, adrenal glands, and pancreas are unremarkable. Within the superior pole of the left kidney, there is a 3.9 cm cystic lesion. A 3.3 cm cystic lesion is also seen within the inferior pole of the left kidney. No calcifications are noted. The kidneys are small bilaterally.
In CT pelvis, evaluation of the bladder is limited due to the presence of a Foley catheter, the bladder is nondistended. The large and small bowels are normal in course and caliber. There is no obstruction.
'''
cpt_light_pipeline = LightPipeline(sbert_models)
get_codes (cpt_light_pipeline, 'cpt_code', text)
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'Procedure':'#008080', 'Test':'#800080'})
light_data_cpt = cpt_light_pipeline.fullAnnotate(text)
vis.display(light_data_cpt[0], 'ner_chunk', 'cpt_code')
text='''1. The left ventricular cavity size and wall thickness appear normal. The wall motion and left ventricular systolic function appears hyperdynamic with estimated ejection fraction of 70% to 75%. There is near-cavity obliteration seen. There also appears to be increased left ventricular outflow tract gradient at the mid cavity level consistent with hyperdynamic left ventricular systolic function. There is abnormal left ventricular relaxation pattern seen as well as elevated left atrial pressures seen by Doppler examination.
2. The left atrium appears mildly dilated.
3. The right atrium and right ventricle appear normal.
4. The aortic root appears normal.
5. The aortic valve appears calcified with mild aortic valve stenosis, calculated aortic valve area is 1.3 cm square with a maximum instantaneous gradient of 34 and a mean gradient of 19 mm.
6. There is mitral annular calcification extending to leaflets and supportive structures with thickening of mitral valve leaflets with mild mitral regurgitation.
7. The tricuspid valve appears normal with trace tricuspid regurgitation with moderate pulmonary artery hypertension. Estimated pulmonary artery systolic pressure is 49 mmHg. Estimated right atrial pressure of 10 mmHg.
8. The pulmonary valve appears normal with trace pulmonary insufficiency.
9. There is no pericardial effusion or intracardiac mass seen.
10. There is a color Doppler suggestive of a patent foramen ovale with lipomatous hypertrophy of the interatrial septum.
11. The study was somewhat technically limited and hence subtle abnormalities could be missed from the study.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
text='''
CC: Left hand numbness on presentation; then developed lethargy later that day.
HX: On the day of presentation, this 72 y/o RHM suddenly developed generalized weakness and lightheadedness, and could not rise from a chair. Four hours later he experienced sudden left hand numbness lasting two hours. There were no other associated symptoms except for the generalized weakness and lightheadedness. He denied vertigo.
He had been experiencing falling spells without associated LOC up to several times a month for the past year.
MEDS: procardia SR, Lasix, Ecotrin, KCL, Digoxin, Colace, Coumadin.
PMH: 1)8/92 evaluation for presyncope (Echocardiogram showed: AV fibrosis/calcification, AV stenosis/insufficiency, MV stenosis with annular calcification and regurgitation, moderate TR, Decreased LV systolic function, severe LAE. MRI brain: focal areas of increased T2 signal in the left cerebellum and in the brainstem probably representing microvascular ischemic disease.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb) CPT Entity Resolvers with sBert
###Code
import json, os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
    license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
import sparknlp
import sparknlp_jsl
import sys, os, time
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.util import *
from sparknlp_jsl.annotator import *
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
###Output
3.4.0
3.4.0
###Markdown
Named Entity Recognition
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
])
data_ner = spark.createDataFrame([['']]).toDF("text")
ner_model = ner_pipeline.fit(data_ner)
ner_light_pipeline = LightPipeline(ner_model)
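# Quick sanity check (a minimal added sketch; the sentence below is made up for illustration).
# LightPipeline.annotate() returns a plain dict keyed by output column name.
sample_sentence = "The patient underwent a CT scan of the abdomen and a selective coronary angiogram."
quick_ner = ner_light_pipeline.annotate(sample_sentence)
print(list(quick_ner.keys()))
print(quick_ner['ner_chunk'])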
clinical_note = (
'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years '
'prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior '
'episode of HTG-induced pancreatitis three years prior to presentation, associated '
'with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, '
'presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. '
'Two weeks prior to presentation, she was treated with a five-day course of amoxicillin '
'for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin '
'for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months '
'at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; '
'significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent '
'laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, '
'creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) '
'10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed '
'as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for '
'starvation ketosis, as she reported poor oral intake for three days prior to admission. However, '
'serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap '
'was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and '
'lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - '
'the original sample was centrifuged and the chylomicron layer removed prior to analysis due to '
'interference from turbidity caused by lipemia again. The patient was treated with an insulin drip '
'for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within '
'24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting '
'of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on '
'40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg '
'two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She '
'had close follow-up with endocrinology post discharge.'
)
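# A small added sketch: tally the entity labels detected in the note before visualizing.
# fullAnnotate() returns Annotation objects whose metadata carries the entity label.
from collections import Counter
ner_annotations = ner_light_pipeline.fullAnnotate(clinical_note)[0]
print(Counter(chunk.metadata['entity'] for chunk in ner_annotations['ner_chunk']))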
from sparknlp_display import NerVisualizer
visualiser = NerVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080', 'TEST':'#800080', 'TREATMENT':'#806080'})
# Set label filter (optional), e.g.:
# visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', labels=['PROBLEM'])
visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', document_col='document')
###Output
_____no_output_____
###Markdown
CPT Resolver
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Test','Procedure'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_pipeline_cpt = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
cpt_resolver])
text = '''
EXAM: Left heart cath, selective coronary angiogram, right common femoral angiogram, and StarClose closure of right common femoral artery.
REASON FOR EXAM: Abnormal stress test and episode of shortness of breath.
PROCEDURE: Right common femoral artery, 6-French sheath, JL4, JR4, and pigtail catheters were used.
FINDINGS:
1. Left main is a large-caliber vessel. It is angiographically free of disease,
2. LAD is a large-caliber vessel. It gives rise to two diagonals and septal perforator. It erupts around the apex. LAD shows an area of 60% to 70% stenosis probably in its mid portion. The lesion is a type A finishing before the takeoff of diagonal 1. The rest of the vessel is angiographically free of disease.
3. Diagonal 1 and diagonal 2 are angiographically free of disease.
4. Left circumflex is a small-to-moderate caliber vessel, gives rise to 1 OM. It is angiographically free of disease.
5. OM-1 is angiographically free of disease.
6. RCA is a large, dominant vessel, gives rise to conus, RV marginal, PDA and one PL. RCA has a tortuous course and it has a 30% to 40% stenosis in its proximal portion.
7. LVEDP is measured 40 mmHg.
8. No gradient between LV and aorta is noted.
Due to contrast concern due to renal function, no LV gram was performed.
Following this, right common femoral angiogram was performed followed by StarClose closure of the right common femoral artery.
'''
data_ner = spark.createDataFrame([[text]]).toDF("text")
sbert_models = sbert_pipeline_cpt.fit(data_ner)
sbert_outputs = sbert_models.transform(data_ner)
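# Optional sketch (added; not part of the original notebook): persist the fitted pipeline with
# the standard Spark ML writer so it can be reloaded later without re-downloading the
# pretrained stages. "cpt_resolver_pipeline" is only a placeholder path.
sbert_models.write().overwrite().save("cpt_resolver_pipeline")
reloaded_cpt_model = PipelineModel.load("cpt_resolver_pipeline")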
from pyspark.sql import functions as F
cpt_sdf = sbert_outputs.select(F.explode(F.arrays_zip("ner_chunk.result","ner_chunk.metadata","cpt_code.result","cpt_code.metadata","ner_chunk.begin","ner_chunk.end")).alias("cpt_code")) \
.select(F.expr("cpt_code['0']").alias("chunk"),
F.expr("cpt_code['4']").alias("begin"),
F.expr("cpt_code['5']").alias("end"),
F.expr("cpt_code['1'].entity").alias("entity"),
F.expr("cpt_code['2']").alias("code"),
F.expr("cpt_code['3'].confidence").alias("confidence"),
F.expr("cpt_code['3'].all_k_resolutions").alias("all_k_resolutions"),
F.expr("cpt_code['3'].all_k_results").alias("all_k_codes"))
cpt_sdf.show(10, truncate=100)
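# A short added sketch: keep only the Procedure chunks and rank them by resolver confidence.
# The confidence metadata is a string, so it is cast to float before ordering.
cpt_sdf.filter(F.col("entity") == "Procedure") \
    .select("chunk", "code", "confidence", "all_k_resolutions") \
    .orderBy(F.col("confidence").cast("float").desc()) \
    .show(truncate=60)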
import pandas as pd
def get_codes(light_model, code, text):
    full_light_result = light_model.fullAnnotate(text)
    chunks = []
    terms = []
    begin = []
    end = []
    resolutions = []
    entity = []
    all_codes = []
    for chunk, term in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
        begin.append(chunk.begin)
        end.append(chunk.end)
        chunks.append(chunk.result)
        terms.append(term.result)
        entity.append(chunk.metadata['entity'])
        resolutions.append(term.metadata['all_k_resolutions'])
        all_codes.append(term.metadata['all_k_results'])
    df = pd.DataFrame({'chunks': chunks, 'begin': begin, 'end': end, 'entity': entity,
                       'code': terms, 'resolutions': resolutions, 'all_codes': all_codes})
    return df
text='''
REASON FOR EXAM: Evaluate for retroperitoneal hematoma on the right side of pelvis, the patient has been following, is currently on Coumadin.
In CT abdomen, there is no evidence for a retroperitoneal hematoma, but there is an metastases on the right kidney.
The liver, spleen, adrenal glands, and pancreas are unremarkable. Within the superior pole of the left kidney, there is a 3.9 cm cystic lesion. A 3.3 cm cystic lesion is also seen within the inferior pole of the left kidney. No calcifications are noted. The kidneys are small bilaterally.
In CT pelvis, evaluation of the bladder is limited due to the presence of a Foley catheter, the bladder is nondistended. The large and small bowels are normal in course and caliber. There is no obstruction.
'''
cpt_light_pipeline = LightPipeline(sbert_models)
get_codes (cpt_light_pipeline, 'cpt_code', text)
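# A further added sketch: expand the top-k metadata returned by get_codes into one row per
# candidate CPT code. The ':::' delimiter is an assumption based on the usual format of the
# resolver's all_k_results / all_k_resolutions metadata strings.
df_exam = get_codes(cpt_light_pipeline, 'cpt_code', text)
topk_rows = []
for _, row in df_exam.iterrows():
    for cpt, resolution in zip(row['all_codes'].split(':::'), row['resolutions'].split(':::')):
        topk_rows.append({'chunk': row['chunks'], 'entity': row['entity'],
                          'cpt_code': cpt, 'resolution': resolution})
pd.DataFrame(topk_rows).head(15)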
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'Procedure':'#008080', 'Test':'#800080'})
light_data_cpt = cpt_light_pipeline.fullAnnotate(text)
vis.display(light_data_cpt[0], 'ner_chunk', 'cpt_code')
text='''1. The left ventricular cavity size and wall thickness appear normal. The wall motion and left ventricular systolic function appears hyperdynamic with estimated ejection fraction of 70% to 75%. There is near-cavity obliteration seen. There also appears to be increased left ventricular outflow tract gradient at the mid cavity level consistent with hyperdynamic left ventricular systolic function. There is abnormal left ventricular relaxation pattern seen as well as elevated left atrial pressures seen by Doppler examination.
2. The left atrium appears mildly dilated.
3. The right atrium and right ventricle appear normal.
4. The aortic root appears normal.
5. The aortic valve appears calcified with mild aortic valve stenosis, calculated aortic valve area is 1.3 cm square with a maximum instantaneous gradient of 34 and a mean gradient of 19 mm.
6. There is mitral annular calcification extending to leaflets and supportive structures with thickening of mitral valve leaflets with mild mitral regurgitation.
7. The tricuspid valve appears normal with trace tricuspid regurgitation with moderate pulmonary artery hypertension. Estimated pulmonary artery systolic pressure is 49 mmHg. Estimated right atrial pressure of 10 mmHg.
8. The pulmonary valve appears normal with trace pulmonary insufficiency.
9. There is no pericardial effusion or intracardiac mass seen.
10. There is a color Doppler suggestive of a patent foramen ovale with lipomatous hypertrophy of the interatrial septum.
11. The study was somewhat technically limited and hence subtle abnormalities could be missed from the study.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
text='''
CC: Left hand numbness on presentation; then developed lethargy later that day.
HX: On the day of presentation, this 72 y/o RHM suddenly developed generalized weakness and lightheadedness, and could not rise from a chair. Four hours later he experienced sudden left hand numbness lasting two hours. There were no other associated symptoms except for the generalized weakness and lightheadedness. He denied vertigo.
He had been experiencing falling spells without associated LOC up to several times a month for the past year.
MEDS: procardia SR, Lasix, Ecotrin, KCL, Digoxin, Colace, Coumadin.
PMH: 1)8/92 evaluation for presyncope (Echocardiogram showed: AV fibrosis/calcification, AV stenosis/insufficiency, MV stenosis with annular calcification and regurgitation, moderate TR, Decreased LV systolic function, severe LAE. MRI brain: focal areas of increased T2 signal in the left cerebellum and in the brainstem probably representing microvascular ischemic disease.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb) CPT Entity Resolvers with sBert
###Code
import json, os
from google.colab import files
if 'spark_jsl.json' not in os.listdir():
license_keys = files.upload()
os.rename(list(license_keys.keys())[0], 'spark_jsl.json')
with open('spark_jsl.json') as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
import json
import os
import sys, time
import sparknlp
import sparknlp_jsl
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.util import *
from sparknlp_jsl.annotator import *
from sparknlp.pretrained import ResourceDownloader
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print("Spark NLP Version :", sparknlp.version())
print("Spark NLP_JSL Version :", sparknlp_jsl.version())
spark
###Output
Spark NLP Version : 3.4.2
Spark NLP_JSL Version : 3.5.0
###Markdown
Named Entity Recognition
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
])
data_ner = spark.createDataFrame([['']]).toDF("text")
ner_model = ner_pipeline.fit(data_ner)
ner_light_pipeline = LightPipeline(ner_model)
clinical_note = (
'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years '
'prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior '
'episode of HTG-induced pancreatitis three years prior to presentation, associated '
'with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, '
'presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. '
'Two weeks prior to presentation, she was treated with a five-day course of amoxicillin '
'for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin '
'for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months '
'at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; '
'significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent '
'laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, '
'creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) '
'10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed '
'as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for '
'starvation ketosis, as she reported poor oral intake for three days prior to admission. However, '
'serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap '
'was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and '
'lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - '
'the original sample was centrifuged and the chylomicron layer removed prior to analysis due to '
'interference from turbidity caused by lipemia again. The patient was treated with an insulin drip '
'for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within '
'24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting '
'of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on '
'40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg '
'two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She '
'had close follow-up with endocrinology post discharge.'
)
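# Illustrative addition (not in the original notebook): LightPipeline.annotate() returns a
# plain dict of output column -> list of strings, which is handy for a quick look at the
# detected chunks before firing up the visualizer.
ner_preview = ner_light_pipeline.annotate(clinical_note)
print(ner_preview['ner_chunk'][:10])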
from sparknlp_display import NerVisualizer
visualiser = NerVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080', 'TEST':'#800080', 'TREATMENT':'#806080'})
# Set label filter
#visualiser.display(ppres, label_col='ner_chunk', labels=['PER'])
visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', document_col='document')
###Output
_____no_output_____
###Markdown
CPT Resolver
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Test','Procedure'])
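# Note: the white list above means only Test and Procedure chunks are kept in ner_chunk,
# so the downstream resolver only attempts CPT mappings for those entity types.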
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_pipeline_cpt = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
cpt_resolver])
text = '''
EXAM: Left heart cath, selective coronary angiogram, right common femoral angiogram, and StarClose closure of right common femoral artery.
REASON FOR EXAM: Abnormal stress test and episode of shortness of breath.
PROCEDURE: Right common femoral artery, 6-French sheath, JL4, JR4, and pigtail catheters were used.
FINDINGS:
1. Left main is a large-caliber vessel. It is angiographically free of disease,
2. LAD is a large-caliber vessel. It gives rise to two diagonals and septal perforator. It erupts around the apex. LAD shows an area of 60% to 70% stenosis probably in its mid portion. The lesion is a type A finishing before the takeoff of diagonal 1. The rest of the vessel is angiographically free of disease.
3. Diagonal 1 and diagonal 2 are angiographically free of disease.
4. Left circumflex is a small-to-moderate caliber vessel, gives rise to 1 OM. It is angiographically free of disease.
5. OM-1 is angiographically free of disease.
6. RCA is a large, dominant vessel, gives rise to conus, RV marginal, PDA and one PL. RCA has a tortuous course and it has a 30% to 40% stenosis in its proximal portion.
7. LVEDP is measured 40 mmHg.
8. No gradient between LV and aorta is noted.
Due to contrast concern due to renal function, no LV gram was performed.
Following this, right common femoral angiogram was performed followed by StarClose closure of the right common femoral artery.
'''
data_ner = spark.createDataFrame([[text]]).toDF("text")
sbert_models = sbert_pipeline_cpt.fit(data_ner)
sbert_outputs = sbert_models.transform(data_ner)
from pyspark.sql import functions as F
cpt_sdf = sbert_outputs.select(F.explode(F.arrays_zip("ner_chunk.result","ner_chunk.metadata","cpt_code.result","cpt_code.metadata","ner_chunk.begin","ner_chunk.end")).alias("cpt_code")) \
.select(F.expr("cpt_code['0']").alias("chunk"),
F.expr("cpt_code['4']").alias("begin"),
F.expr("cpt_code['5']").alias("end"),
F.expr("cpt_code['1'].entity").alias("entity"),
F.expr("cpt_code['2']").alias("code"),
F.expr("cpt_code['3'].confidence").alias("confidence"),
F.expr("cpt_code['3'].all_k_resolutions").alias("all_k_resolutions"),
F.expr("cpt_code['3'].all_k_results").alias("all_k_codes"))
cpt_sdf.show(10, truncate=100)
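# Illustrative follow-up (assumes only the columns built above): confidence comes back as a
# string in the chunk metadata, so cast it before filtering for higher-confidence mappings.
cpt_sdf.filter(F.col("confidence").cast("float") > 0.5).show(10, truncate=100)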
import pandas as pd
def get_codes (light_model, code, text):
full_light_result = light_model.fullAnnotate(text)
chunks = []
terms = []
begin = []
end = []
resolutions=[]
entity=[]
all_codes=[]
for chunk, term in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
begin.append(chunk.begin)
end.append(chunk.end)
chunks.append(chunk.result)
terms.append(term.result)
entity.append(chunk.metadata['entity'])
resolutions.append(term.metadata['all_k_resolutions'])
all_codes.append(term.metadata['all_k_results'])
df = pd.DataFrame({'chunks':chunks, 'begin': begin, 'end':end, 'entity':entity,
'code':terms,'resolutions':resolutions,'all_codes':all_codes})
return df
text='''
REASON FOR EXAM: Evaluate for retroperitoneal hematoma on the right side of pelvis, the patient has been following, is currently on Coumadin.
In CT abdomen, there is no evidence for a retroperitoneal hematoma, but there is an metastases on the right kidney.
The liver, spleen, adrenal glands, and pancreas are unremarkable. Within the superior pole of the left kidney, there is a 3.9 cm cystic lesion. A 3.3 cm cystic lesion is also seen within the inferior pole of the left kidney. No calcifications are noted. The kidneys are small bilaterally.
In CT pelvis, evaluation of the bladder is limited due to the presence of a Foley catheter, the bladder is nondistended. The large and small bowels are normal in course and caliber. There is no obstruction.
'''
cpt_light_pipeline = LightPipeline(sbert_models)
get_codes (cpt_light_pipeline, 'cpt_code', text)
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'Procedure':'#008080', 'Test':'#800080'})
light_data_cpt = cpt_light_pipeline.fullAnnotate(text)
vis.display(light_data_cpt[0], 'ner_chunk', 'cpt_code')
text='''1. The left ventricular cavity size and wall thickness appear normal. The wall motion and left ventricular systolic function appears hyperdynamic with estimated ejection fraction of 70% to 75%. There is near-cavity obliteration seen. There also appears to be increased left ventricular outflow tract gradient at the mid cavity level consistent with hyperdynamic left ventricular systolic function. There is abnormal left ventricular relaxation pattern seen as well as elevated left atrial pressures seen by Doppler examination.
2. The left atrium appears mildly dilated.
3. The right atrium and right ventricle appear normal.
4. The aortic root appears normal.
5. The aortic valve appears calcified with mild aortic valve stenosis, calculated aortic valve area is 1.3 cm square with a maximum instantaneous gradient of 34 and a mean gradient of 19 mm.
6. There is mitral annular calcification extending to leaflets and supportive structures with thickening of mitral valve leaflets with mild mitral regurgitation.
7. The tricuspid valve appears normal with trace tricuspid regurgitation with moderate pulmonary artery hypertension. Estimated pulmonary artery systolic pressure is 49 mmHg. Estimated right atrial pressure of 10 mmHg.
8. The pulmonary valve appears normal with trace pulmonary insufficiency.
9. There is no pericardial effusion or intracardiac mass seen.
10. There is a color Doppler suggestive of a patent foramen ovale with lipomatous hypertrophy of the interatrial septum.
11. The study was somewhat technically limited and hence subtle abnormalities could be missed from the study.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
text='''
CC: Left hand numbness on presentation; then developed lethargy later that day.
HX: On the day of presentation, this 72 y/o RHM suddenly developed generalized weakness and lightheadedness, and could not rise from a chair. Four hours later he experienced sudden left hand numbness lasting two hours. There were no other associated symptoms except for the generalized weakness and lightheadedness. He denied vertigo.
He had been experiencing falling spells without associated LOC up to several times a month for the past year.
MEDS: procardia SR, Lasix, Ecotrin, KCL, Digoxin, Colace, Coumadin.
PMH: 1)8/92 evaluation for presyncope (Echocardiogram showed: AV fibrosis/calcification, AV stenosis/insufficiency, MV stenosis with annular calcification and regurgitation, moderate TR, Decreased LV systolic function, severe LAE. MRI brain: focal areas of increased T2 signal in the left cerebellum and in the brainstem probably representing microvascular ischemic disease.
'''
df = get_codes (cpt_light_pipeline, 'cpt_code', text)
df
###Output
_____no_output_____ |
content/lessons/06/Class-Coding-Lab/CCL-Functions.ipynb | ###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the variable `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
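A minimal sketch, added for illustration (the name `double` is ours, not the lab's), of what a user-defined function looks like:
```
def double(n):
    '''Return twice the value passed in.'''
    return n * 2

print(double(21))   # prints 42
```
It has the same shape as the built-ins we are about to explore: a name, an input, and a returned output.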
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
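# For example: math.factorial(4) evaluates to 24.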
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card=input("enter the first digit of the credit card: ")
ident=card[0]
if ident == "5":
print("your card is American Express")
elif ident == "4":
print("your card is a VISA")
elif ident == "5":
print(" your card is a Master Card")
elif ident == "6":
print("your card is a Discover card")
else:
print("invalid card")
###Output
enter the first digit of the credit card: 7
invalid card
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
ident=card[0]
if ident == '4':
issuer="VISA"
elif ident == '5':
issuer="Master Card"
elif ident == '6':
issuer="Discover card"
elif ident == '3':
issuer="American Express"
else:
issuer="invalid card"
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover")
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express")
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card")
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
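# A couple of assert-style checks (an illustrative alternative; the expected strings
# match the return values of the CardIssuer() defined above):
assert CardIssuer("40123456789") == "VISA", "expected VISA for a card starting with 4"
assert CardIssuer("90123456789") == "invalid card", "expected invalid card otherwise"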
###Output
5
Master Card
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Master Card
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card
4
VISA
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL VISA
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
def CardIssuer(card):
ident=card[0]
if ident == '4':
issuer="VISA"
elif ident == '5':
issuer="Master Card"
elif ident == '6':
issuer="Discover card"
elif ident == '3':
issuer="American Express"
else:
issuer="invalid card"
# the last line in the function should return the output
return issuer
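# A minimal sketch of the rewrite described in the markdown above, kept as comments:
# card = input("Enter credit card number: ")
# print(CardIssuer(card))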
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
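# Worked example: for card = "5937", parity = 4 % 2 = 0, so digits at even positions are
# doubled: 5*2=10 -> 10-9=1, then 9, 3*2=6, 7; total = 1+9+6+7 = 23.
# 23 % 10 != 0, so checkLuhn("5937") returns False.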
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True")
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False")
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
credit_card = (input("input a credit card number: "))
if credit_card == 'quit':
break
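    # A minimal sketch of the missing branch from the algorithm above, kept as comments
    # so the recorded run below still matches:
    # elif checkLuhn(credit_card):
    #     print(CardIssuer(credit_card))
    # else:
    #     print("invalid card")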
###Output
input a credit card number: 101
input a credit card number: 345t67788
input a credit card number: u479474384797192
input a credit card number: rcniq3urb8rbv8y21bv0v
input a credit card number: r3ybf7b7bvrb4rv8yb3081bfb
input a credit card number: quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the variable `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = input("Enter credit card number:")
digit = int(card[0])
if digit == 4:
issuer = "Visa"
elif digit == 5:
issuer = "MasterCard"
elif digit == 6:
issuer = "Discover"
elif digit == 3:
issuer = "American Express"
else:
issuer = "Invalid"
print(issuer)
###Output
Enter credit card number:4567876543
Visa
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
digit = int(card[0])
if digit == 4:
issuer = "Visa"
elif digit == 5:
issuer = "MasterCard"
elif digit == 6:
issuer = "Discover"
elif digit == 3:
issuer = "American Express"
else:
issuer = "Invalid"
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter credit card number:")
CardIssuer(card)
###Output
Enter credit card number:5678765432
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL False
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
while True:
    card = input("Enter credit card number:")
    if card == "quit":
        break
    if checkLuhn(card) == True:
        issuer = CardIssuer(card)
        print(issuer)
    else:
        print("Invalid card")
###Output
Enter credit card number:5443713204330437
Visa
Enter credit card number:quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the variable `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = input("enter credit card numbers")
digit = int(card[0])
if digit == 4:
    issuer = "Visa"
elif digit == 5:
    issuer = "MasterCard"
elif digit == 6:
    issuer = "Discover"
elif digit == 3:
    issuer = "American Express"
else:
    issuer = "Invalid"
print(issuer)
###Output
_____no_output_____
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
    digit = int(card[0])
    if digit == 4:
        issuer = "Visa"
    elif digit == 5:
        issuer = "MasterCard"
    elif digit == 6:
        issuer = "Discover"
    elif digit == 3:
        issuer = "American Express"
    else:
        issuer = "Invalid"
    return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))`
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return invalid card ACTUAL", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("enter credit card number")
CardIssuer(card)
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn ("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
    card = input("enter credit card number")
    if card == "quit":
        break
    if checkLuhn(card) == True:
        issuer = CardIssuer(card)
        print(issuer)
    else:
        print("invalid card")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the variable `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
ccard = input("Please enter your credit card number: ")
digit = ccard[0]
if digit == '4':
card_issuer = "Visa"
elif digit == '5':
card_issuer = "MasterCard"
elif digit == '6':
card_issuer = "Discover"
elif digit == '3':
card_issuer = "American Express"
else:
card_issuer = "Invalid"
print(card_issuer)
###Output
Please enter your credit card number: 4215678
Visa
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write, so let's move on to: Step 2: RefactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
card_issuer = "Visa"
elif digit == '5':
card_issuer = "MasterCard"
elif digit == '6':
card_issuer = "Discover"
elif digit == '3':
card_issuer = "American Express"
else:
card_issuer = "Invalid"
# the last line in the function should return the output
return card_issuer
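# A sketch of an equivalent table-driven version (not required by the lab; the helper name
# card_issuer_lookup is just an illustration). A dictionary maps each leading digit to its issuer.
def card_issuer_lookup(card):
    '''Same mapping as CardIssuer(), expressed as a dictionary lookup.'''
    issuers = {"4": "Visa", "5": "MasterCard", "6": "Discover", "3": "American Express"}
    return issuers.get(card[0], "Invalid")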
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL:", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL:", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL:", CardIssuer("90123456789"))
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL: Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL: American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL: Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
ccard = input("Please enter your credit card number: ")
card_issuer = CardIssuer(ccard)
print(card_issuer)
###Output
Please enter your credit card number: 1234567
Invalid
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**: a formula over the digits that determines the validity of the number. Here's the function which, given a card number, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adapted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
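# Worked illustration of the checksum (for intuition only): for card = "59", length is 2 and parity is 0,
# so the digit at index 0 is doubled (5 -> 10 -> 10 - 9 = 1) while the digit at index 1 is kept as-is (9);
# the total is 1 + 9 = 10, and 10 % 10 == 0, so checkLuhn("59") returns True.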
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("When card='4716511919678261' We EXPECT checkLuhn(card) to return True. ACTUAL: %s" % checkLuhn('4716511919678261'))
print("When card='4222222222222222' We EXPECT checkLuhn(card) to return False. ACTUAL: %s" % checkLuhn('4222222222222222'))
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'; for each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
ccard = input("Enter your credit card number or'quit'to end the program. ")
if ccard == 'quit':
break
try:
if checkLuhn(ccard) == True:
issuer = CardIssuer(ccard)
print(issuer)
else:
print("Invalid card. Try again.")
except:
print("Please enter a number. Try again.")
###Output
Enter your credit card number or 'quit' to end the program. qwqwewq
Please enter a number. Try again.
Enter your credit card number or 'quit' to end the program. 601139199679300
Invalid card. Try again.
Enter your credit card number or 'quit' to end the program. 4916741594665261
Visa
Enter your credit card number or 'quit' to end the program. quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Cat Problem**You want to buy 3 cats from a pet store that has 50 cats. In how many ways can you do this?**This is a classic application in the area of mathematics known as *combinatorics*, which is the study of objects belonging to a finite set in accordance with certain constraints.In this example the set is 50 cats, where we select 3 of those 50 cats and the order in which we select them does not matter. We want to know how many different combinations of 3 cats we can get from the 50.This problem, written as a program, would work like this:```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```Of course `` gets replaced with the answer, but we don't know how to do that....yet. Combinatorics 101In *combinatorics*:- a **permutation** defined as `P(n,k)` is the number of ordered arrangements of `n` things taken `k` at a time.- a **combination** defined as `C(n,k)` is the number of un-ordered arrangements of `n` things taken `k` at a time.In our cat case we're bringing 3 (`k`) home from the 50 (`n`) and their order doesn't matter (after all, we plan on loving them equally), so we want **combination** instead of **permutation**. An example of permutation would be if those same cats were in a beauty contest and the 3 (`k`) were to be placed in 1st, 2nd and 3rd. Formula for C(n,k) The formula for `C(n,k)` is as follows: `n! / ((n-k)! * k!)`. We will eventually write this as a user-defined Python function, but before we do, what exactly is `!` ? FactorialThe `!` is not a Python symbol, it is a mathematical symbol. It represents **factorial**: `n!` is defined as the product of the positive integer `n` and all the positive integers less than `n`. Furthermore `0! == 1`.Example: `5! == 5*4*3*2*1 == 120` We are ready to write our program!Our cat problem needs the combination formula, the combination formula needs factorial. We now know everything we need to solve the problem. We just have to assemble it all into a working program! You could solve this problem by writing a user-defined Python function for factorial, then another function for combination. Instead, we'll take a hybrid approach, using the factorial function from the Python standard library and writing a user-defined combination function. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names in the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #should be 120
math.factorial(0) # should be 1
###Output
_____no_output_____
###Markdown
Next we need to write a user-defined function for the **combination** formula. Recall: `combination(n,k)` is defined as `n! / ((n-k)! * k!)`. Use `math.factorial()` in place of `!` in the formula. For example `(n-k)!` would be `math.factorial(n-k)` in Python.
###Code
#TODO: Write code to define the combination(n,k) function here:
def combination(n, k):
return int(math.factorial(n) / (math.factorial((n-k)) * math.factorial(k)))
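# Cross-check against the expected value; math.comb (available in Python 3.8+) computes the
# same quantity directly. assert is silent when the check passes:
assert combination(50, 3) == 19600 == math.comb(50, 3)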
## Test your combination function here
print(combination(50,3)) # should be 19600
print(combination(4,1)) # should be 4
###Output
4
###Markdown
Now write the entire programSample run```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```To-Do List:``` TODO List for program1. input how many cats at pet store? save in variable n2. input how many you are willing to take home? save in variable k3. compute combination of n and k4. print results```
###Code
n = int(input("How many cats are at the pet store? "))
k = int(input("How many cats are you willing to take home? "))
print("There are %d different combinations of %d cats from the %d you can choose to take home!" %(combination(n,k), k, n))
###Output
How many cats are at the pet store? 50
How many cats are you willing to take home? 3
There are 19600 different combinations of 3 cats from the 50 you can choose to take home!
###Markdown
The Cat Beauty ContestWe made mention of a cat beauty pageant, where order does matter, which would use the **permutation** formula. Do the following:1. Write a function `permutation(n,k)` in Python to implement the permutation formula2. Write a main program similar to the one you wrote above, but which instead implements the cat beauty contest.``` TODO List for program1. print "welcome to the cat beauty contest"2. input how many cat contestants? save input into variable n3. how many places? save input into variable k4. compute permutation(n,k)5. print number of possible ways the contest can end.```
###Code
def permutation(n, k):
return int(math.factorial(n) / (math.factorial((n-k))))
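# Cross-check against the expected value; math.perm (available in Python 3.8+) computes the
# same quantity directly. assert is silent when the check passes:
assert permutation(50, 3) == 117600 == math.perm(50, 3)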
print("Welcome to the cat beauty contest")
n = int(input("How many cat contestant? "))
k = int(input("How many places? "))
print("There are %d possible ways the contest can end" % (permutation(n,k)))
###Output
welcome to the cat beauty contest
how many cat contestant? 50
how many places? 3
There are 117600 possible ways the contest can end
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
import math
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
import math
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
import math
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = (input("Enter credit card number: "))
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit == '5':
issuer = "MasterCard"
elif digit == '6':
issuer = "DiscoverCard"
elif digit == '3':
issuer = "American Express"
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
card = (input("Enter credit card number: "))
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit == '5':
issuer = "MasterCard"
elif digit == '6':
issuer = "DiscoverCard"
elif digit == '3':
issuer = "American Express"
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
card = (input("Enter credit card number: "))
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit == '5':
issuer = "MasterCard"
elif digit == '6':
issuer = "DiscoverCard"
elif digit == '3':
issuer = "American Express"
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
card = (input("Enter credit card number: "))
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit == '5':
issuer = "MasterCard"
elif digit == '6':
issuer = "DiscoverCard"
elif digit == '3':
issuer = "American Express"
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
card = (input("Enter credit card number: "))
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit == '5':
issuer = "MasterCard"
elif digit == '6':
issuer = "DiscoverCard"
elif digit == '3':
issuer = "American Express"
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
###Output
Enter credit card number: 90994023949802
Your cardholder is Invalid Card
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
issuer = "Visa"
print("Your cardholder is",issuer)
elif digit == '5':
issuer = "MasterCard"
print("Your cardholder is",issuer)
elif digit == '6':
issuer = "DiscoverCard"
print("Your cardholder is",issuer)
elif digit == '3':
issuer = "American Express"
print("Your cardholder is",issuer)
else:
issuer = "Invalid Card"
print("Your cardholder is",issuer)
# the last line in the function should return the output
return issuer
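# Note (an observation, not part of the lab): printing inside the function and also printing its
# return value in the caller shows the message twice; returning the string and letting the caller
# decide what to print is usually the cleaner design.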
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card= '40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card= '50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card= '60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card= '30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card= '90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
###Output
Your cardholder is Visa
WHEN card= '40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
Your cardholder is MasterCard
WHEN card= '50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
Your cardholder is DiscoverCard
WHEN card= '60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL DiscoverCard
Your cardholder is American Express
WHEN card= '30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
Your cardholder is Invalid Card
WHEN card= '90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL Invalid Card
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = (input("Enter credit card number: "))
digit = card[0]
CardIssuer(card)
# TODO Re-write the program here, calling our function.
card = (input("Enter credit card number: "))
digit = card[0]
CardIssuer(card)
# TODO Re-write the program here, calling our function.
card = (input("Enter credit card number: "))
digit = card[0]
CardIssuer(card)
# TODO Re-write the program here, calling our function.
card = (input("Enter credit card number: "))
digit = card[0]
CardIssuer(card)
# TODO Re-write the program here, calling our function.
card = (input("Enter credit card number: "))
digit = card[0]
CardIssuer(card)
###Output
Enter credit card number: 1292472395
Your cardholder is Invalid Card
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adapted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn('4556784452501223'))
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn('4473274777777777'))
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn('5443713204330437'))
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL True
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL False
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL True
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
card = input("Enter your credit card number, or type quit.")
if card == 'quit':
print("End of module")
break
if checkLuhn(card) == True:
CardIssuer(card)
else:
print("Invalid card.")
while True:
card = input("Enter your credit card number, or type quit.")
if card == 'quit':
print("End of module")
break
if checkLuhn(card) == True:
CardIssuer(card)
else:
print("Invalid card.")
while True:
card = input("Enter your credit card number, or type quit.")
if card == 'quit':
print("End of module")
break
if checkLuhn(card) == True:
CardIssuer(card)
else:
print("Invalid card.")
while True:
card = input("Enter your credit card number, or type quit.")
if card == 'quit':
print("End of module")
break
if checkLuhn(card) == True:
CardIssuer(card)
else:
print("Invalid card.")
while True:
card = input("Enter your credit card number, or type quit.")
if card == 'quit':
print("End of module")
break
if checkLuhn(card) == True:
CardIssuer(card)
else:
print("Invalid card.")
###Output
Enter your credit card number, or type quit.539205923875803750357
Invalid card.
Enter your credit card number, or type quit.quit
End of module
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card=input("enter first digit of credit card:")
ident=card[0]
if ident == "3":
print("issuer is American Express")
elif ident == "4":
print("issuer is Visa")
elif ident == "5":
print("issuer is Master Card")
elif ident =="6":
print("issuer is Discover")
else:
print("invalid issuer")
###Output
enter first digit of credit card:3
issuer is American Express
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code. def CardIssuer(card): '''This function takes a card number (card) as input, and returns the issuer name as output''' TODO write code here they should be the same as lines 3-13 from the code above the last line in the function should return the output return issuer Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
def CardIssuer(card):
ident=card[0]
if ident == "3":
issuer = ("American Express")
elif ident == "4":
issuer = ("Visa")
elif ident == "5":
issuer = ("Master Card")
elif ident =="6":
issuer = ("Discover")
else:
issuer = ("invalid issuer")
return issuer
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return Discover", CardIssuer("30123456789"))
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Master Card
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
info=input("enter credit card:")
checkLuhn(info)
###Output
enter credit card:5443713204330437
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
def checkLuhn(card):
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
def CardIssuer(card):
ident=card[0]
if ident == "3":
issuer = ("American Express")
elif ident == "4":
issuer = ("Visa")
elif ident == "5":
issuer = ("Master Card")
elif ident =="6":
issuer = ("Discover")
else:
issuer = ("invalid issuer")
return issuer
while True:
v=input("enter a card or quit:")
if v=='quit':
break
if checkLuhn(v):
print("Card comes from ",CardIssuer(v))
else:
print("invalid card")
###Output
enter a card or quit:5443713204330437
Card comes from Master Card
enter a card or quit:46656278472738237
invalid card
enter a card or quit:quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Cat Problem**You want to buy 3 cats from a pet store that has 50 cats. In how many ways can you do this?**This is a classic application in the area of mathematics known as *combinatorics* which is the study of objects belonging to a finite set in accordance with certain constraints.In this example the set is 50 cats, where we select 3 of those 50 cats and the order in which we select them does not matter. We want to know how many different combinations of 3 cats can we get from the 50.This problem, written as a program would work like this:```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```Of course `` gets replaced with the answer, but we don't know how to do that....yet. Combinatorics 101In *combinatorics*:- a **permutation** defined as `P(n,k)` is the number of ordered arrangements of `n` things taken `k` at a time.- a **combination** defined as `C(n,k)` is the number of un-ordered arrangements of `n` things taken `k` at a time.In our cat case we're bringing 3 (`k`) home from the 50 (`n`) and their order doesn't matter, (after all we plan on loving them equally) so we want **combination** instead of **permutation**. An example of permutation would be if those same cats were in a beauty contest and the 3 (`k`) were to be placed in 1st, 2nd and 3rd. Formula for C(n,k) The formula for `C(n,k)` is as follows: `n! / ((n-k)! * k!) ` we will eventually write this as a user-defined Python function, but before we do, what exactly is `!` ? FactorialThe `!` is not a Python symbol, it is a mathematical symbol. It represents **factorial** defined as `n!` as the the product of the positive integer `n` and all the positive integers less than `n`. Furthermore `0! == 1`.Example: `5! == 5*4*3*2*1 == 120` We are ready to write our program!Our cat problem needs the combination formula, the combination formula needs factorial. We now know everything we need to solve the problem. We just have to assemble it all into a working program! You could solve this problem by writing a user-defined Python function for factorial, then another function for combination. Instead, we'll take a hybrid approach, using the factorial function from the Python standard library and writing a user-defined combination function. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #should be 120
math.factorial(0) # should be 1
###Output
_____no_output_____
###Markdown
Next we need to write a user-defined function for the **combination** formula. Recall:`combination(n,k)` is defined as `n! / ((n-k)! * k!)` use `math.factorial()` in place of `!` in the formula. For example `(n-k)!` would be `math.factorial(n-k)` in Python.
###Code
#TODO: Write code to define the combination(n,k) function here:
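# A minimal sketch of the missing definition (an assumption: it simply applies the
# formula n! / ((n-k)! * k!) given above, and math has already been imported in this notebook):
def combination(n, k):
    '''Number of un-ordered arrangements of n things taken k at a time.'''
    return math.factorial(n) // (math.factorial(n - k) * math.factorial(k))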
## Test your combination function here
combination(50,3) # should be 19600
combination(4,1) # should be 4
###Output
_____no_output_____
###Markdown
Now write the entire programSample run```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```TO-Do List:``` TODO List for program1. input how many cats at pet store? save in variable n2. input how many you are willing to take home? sabe in variable k3. compute combination of n and k4. print results```
###Code
# TODO: Write entire program
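# A minimal sketch of one possible solution, assuming the combination() function
# defined earlier in this notebook is available:
n = int(input("How many cats are at the pet store? "))
k = int(input("How many are you willing to take home? "))
print("There are %d different combinations of %d cats from the %d you can choose to take home!" % (combination(n, k), k, n))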
###Output
_____no_output_____
###Markdown
The Cat Beauty ContestWe made mention of a cat beauty pagent, where order does matter, would use the **permutation** formula. Do the following:1. Write a function `permutation(n,k)` in Python to implement the permutation formula2. Write a main program similar to the one you wrote above, but instead implements the cat beauty contest.``` TODO List for program1. print "welcome to the cat beauty contest"2. input how many cat contenstents? save input into variable n3. how many places? save input into variable k4. compute permutation(n,k)5. print number of possible ways the contest can end.```
###Code
# TODO Write function and program here
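# A minimal sketch under the same assumptions (math imported above; permutation formula P(n,k) = n! / (n-k)!):
def permutation(n, k):
    '''Number of ordered arrangements of n things taken k at a time.'''
    return math.factorial(n) // math.factorial(n - k)

print("welcome to the cat beauty contest")
n = int(input("How many cat contestants? "))
k = int(input("How many places? "))
print("There are %d possible ways the contest can end." % permutation(n, k))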
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
first_digit = int(input("Enter the first digit: "))
if first_digit == 4:
print('the card issuer is Visa')
elif first_digit == 5:
print('the card issuer is Mastercard')
elif first_digit == 6:
print('the card issuer is Discover')
elif first_digit == 3:
print('the card issuer is American Express')
else:
print("the issuer is invalid")
###Output
Enter the first digit: 5
the card issuer is Mastercard
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
    '''This function takes a card number (card) as input, and returns the issuer name as output'''
    digit = card[0]
    if digit == '4':
        issuer = "Visa"
    elif digit == '5':
        issuer = "MasterCard"
    elif digit == '6':
        issuer = "Discover"
    elif digit == '3':
        issuer = "American Express"
    else:
        issuer = "Invalid Card"
    # the last line in the function returns the output
    return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
Enter the card number: 40123456789
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
card = input("Enter the card number: ")
print("the card issuer is", CardIssuer(card))
###Output
Enter the first digit: 40123456789
the card issuer is Visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**: a formula over the digits determines whether the number is valid. Here's the function which, given a card number, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
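
# Quick sanity check (a sketch using the lab's own example numbers):
assert checkLuhn("5443713204330437") is True    # expected to pass the Luhn check
assert checkLuhn("5111111111111111") is False   # expected to fail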
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))
###Output
enter your credit card value: 54
Not a valid credit card number
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
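# A minimal sketch of the loop described above. It assumes CardIssuer() and
# checkLuhn() have been defined in the cells above.
while True:
    card = input("Enter a credit card number (or 'quit' to stop): ")
    if card == 'quit':
        break
    if checkLuhn(card):
        print("That card was issued by", CardIssuer(card))
    else:
        print("Invalid card")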
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?If it is, is it Visa, MasterCard, Discover, or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** in `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the variable `n`.Once we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example, this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program! ```print_title("Area of a Rectangle"); length = float(input("Enter length: ")); width = float(input("Enter width: ")); area = length * width; print_normal("The area is %.2f" % area)``` Let's get back to credit cards.... Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable card; get the first digit of the card number (eg. digit = card[0]); if digit equals "4" the card issuer is "Visa"; elif digit equals "5" the card issuer is "MasterCard"; elif digit equals "6" the card issuer is "Discover"; elif digit equals "3" the card issuer is "American Express"; else the issuer is "Invalid"; print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = input("Enter your card number. ")
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
print("Invalid input. ")
print(issuer)
###Output
Enter your card number. 5471
MasterCard
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit=card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
    else:
        issuer = "Invalid Card"
    print(issuer)
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
Visa
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
MasterCard
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
Discover
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
American Express
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
Invalid input.
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter your card number. ")
issuer=CardIssuer(card)
print("That card was issued by ", issuer)
###Output
Enter your card number. 45677
Visa
That card was issued by Visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
    ccn = input("Enter Credit Card number... or type quit to stop. ")
    if ccn == "quit":
        break
    elif checkLuhn(ccn):
        print("That card was issued by", CardIssuer(ccn))
    else:
        print("Invalid card")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requies an integer value (which it referrs to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
m = int(input("Enter a number to use the factorial function on: "))
math.factorial(m)
###Output
Enter a number to use the factorial function on: 4
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
n = int(input("Enter the first number of your credit card: "))
if n == 4:
print("The card issuer is Visa")
elif n == 5:
print("The card issuer is MasterCard")
elif n == 6:
print("The card issuer is Discover")
elif n == 3:
print("The card issuer is American Express")
else:
print("The issuer is Invalid")
###Output
Enter the first number of your credit card: 7
The issuer is Invalid
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
n = card[0]
    if n == '4':
        issuer = "Visa"
    elif n == '5':
        issuer = "MasterCard"
    elif n == '6':
        issuer = "Discover"
    elif n == '3':
        issuer = "American Express"
else:
issuer = "Invalid"
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
Enter the first number of your credit card: 40123456789
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL None
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
n = input("Enter your credit card number: ")
issuer = CardIssuer(n)
print("Card issued by ",issuer)
###Output
Enter your credit card number: 40123456789
Card issued by visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
c = input("Enter your credit card number: ")
TF=(checkLuhn(c))
print("WHEN card = '5443713204330437' We EXPECT checkLuhn(card) to return ",TF)
if checkLuhn(c):
print ("valid card number")
else:
print("invalid number")
###Output
Enter your credit card number: 5443713204330437
WHEN card = '5443713204330437' We EXPECT checkLuhn(card) to return True
valid card number
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
c = input("Enter your credit card number or quit: ")
if c == 'quit':
break
if checkLuhn(c):
issuer=CardIssuer(c)
print ("That card was issued by ",issuer)
else:
print("Not a valid card number")
###Output
Enter your credit card number or quit: 40123456789
Not a valid card number
Enter your credit card number or quit: quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requies an integer value (which it referrs to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
print("This isn't a scam, I swear")
card = input("Enter your CC number: ")
digit = card[0]
if digit == "4":
print("The card issuer is Visa")
elif digit == "6":
print("The card issuer is Discover")
elif digit == "5":
print("The care issuer is Mastercard")
elif digit == "3":
print("The card issuer is American Express")
else:
print("Invalid Card #")
###Output
This isn't a scam, I swear
Enter your CC number
None1
Invalid Card #
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
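    # A minimal sketch of the body, reusing the digit-to-issuer mapping from the lab:
    digit = card[0]
    if digit == "4":
        issuer = "Visa"
    elif digit == "5":
        issuer = "MasterCard"
    elif digit == "6":
        issuer = "Discover"
    elif digit == "3":
        issuer = "American Express"
    else:
        issuer = "Invalid Card"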
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
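# A sketch of the remaining three tests, using the example card numbers from the lab:
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))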
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
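# A minimal sketch: read the card number, call the function, print the result.
card = input("Enter your card number: ")
issuer = CardIssuer(card)
print("That card was issued by", issuer)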
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
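# A sketch of the two tests, using the example numbers given above:
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))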
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
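# A minimal sketch of the loop from the algorithm above, assuming the CardIssuer()
# and checkLuhn() functions defined earlier in this notebook.
while True:
    card = input("Enter a credit card number (or 'quit' to stop): ")
    if card == 'quit':
        break
    if checkLuhn(card):
        print("That card was issued by", CardIssuer(card))
    else:
        print("Invalid card")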
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requies an integer value (which it referrs to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JuypterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
# what if argument and variable the same?
def AreaOfRectangle(length,width):
area = length*width
#does the operator work if its string?
length = "ab"
width = "dc"
return area
AreaOfRectangle(4,4)
# areaofrectangle(length, width)   # NameError: Python names are case-sensitive, and length/width only exist inside the function
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
# Earlier broken attempts, kept as comments so the cell still runs:
# def card(number, issuer):
#     digit = list(input("what is your credit card number? ")   # error: missing closing parenthesis
#     bank =                                                    # error: assignment with no value
# def card(number, issuer):
#     digit = 0
#     bank = 0
#     statement = digit, bank
#     return
# return answer   # error: 'return' outside a function
print_title("Input credit card number into variable card")
print_normal("get the first digit of the number")
## TODO: Write your code here
card = input("what is the first number of the credit card? ")
digit = card[0]
if digit == '4':
    print("The card issuer is Visa")
elif digit == '5':
    print("the card issuer is MasterCard")
elif digit == '6':
    print("The card issuer is Discover")
elif digit == '3':
    print_normal("the Card issuer is American Express")
else:
print("the issuer is 'invalid' ")
print (card)
###Output
what is the first number of the credit card? 4
The card issuer is Visa
4
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
#the correct version
#card = input("what is the first number of the credit card? ")
def CardIssuer(card):
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit =='5':
issuer = "MasterCard"
elif digit == '6':
issuer = "discover"
elif digit == '3':
issuer = "Amex"
else:
issuer = "issuer is invalid"
# the last line in the function should return the output
return issuer
cardIssedBy = CardIssuer('5536')
print("My card was issued by %s" % (cardIssedBy))
# correct this later: 'card' is not defined in this cell, so call with the lab's sample number for now
CardIssuer('5300023581452982')
###Output
the card issuer is MasterCard
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
CardIssuer(card)
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###what does this mean in human terms?
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL",CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL",CardIssuer("90123456789"))
#unit test google
###Output
The card issuer is Visa
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL None
the card issuer is MasterCard
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL None
The card issuer is Discover
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL None
the Card issuer is American Express
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL None
the issuer is 'invalid'
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL None
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("what is the first number of the credit card? ")
def CardIssuer(card):
issuer = 0
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
print ("The card issuer is Visa")
elif digit =='5':
print("the card issuer is MasterCard")
elif digit == '6':
print("The card issuer is Discover")
elif digit == '3':
print ("the Card issuer is American Express")
else:
print("the issuer is 'invalid' ")
# the last line in the function should return the output
return issuer
#im still confused what to return to
def CardIssuer(cardNum):
## TODO write code here they should be the same as lines 3-13 from the code above
    digit = cardNum[0]
if digit == '4':
print ("The card issuer is Visa")
elif digit =='5':
print("the card issuer is MasterCard")
elif digit == '6':
print("The card issuer is Discover")
elif digit == '3':
print ("the Card issuer is American Express")
else:
print("the issuer is 'invalid' ")
# the last line in the function should return the output
return digit
print(CardIssuer("123"))
print(CardIssuer("423"))
print(CardIssuer("523"))
CardIssuer(card)
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accellerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me. Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verfication approach called a **checksum** to as it uses a formula to figure out the validity. Here's the function which given a card will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least to tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
print ("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True")
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False")
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False
###Markdown
Putting it all togetherFinally use our two functions to write the following program. It will ask for a series of credit card numbers, until you enter 'quit' for each number it will output whether it's invalid or if valid name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
#fail ver
while True:
card = input("input a credit card number: ")
if card == 'quit':
break
elif checkLuhn(card) == True:
print ("my card was issued by %s"%(cardIssedBy))
else:
return issuer
#avi recitation ver
def getCardIssuer(cardNum):
fDigit = cardNum[0]
cardIssuer = ""
if fDigit == '4':
cardIssuer = "Visa"
elif fDigit =="5":
cardIssuer = "MasterCard"
elif fDigit == "6":
cardIssuer = "Discover"
elif fDigit == "3":
cardIssuer = "AmericanExpress"
else:
cardIssuer = "Invalid Card"
return cardIssuer
## TODO Write code here
# the last line in the function should return the output
#ignore this
while True:
    digit = input("")
    if digit == "quit":
        break
    elif checkLuhn(digit):
        cardIssuer = getCardIssuer(digit)
        print("Your card was issued by %s" % (cardIssuer))
    else:
        print("Does not look like a valid card")
# print("WHEN card='4485961198045899' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("4485961198045899"))
# print("WHEN card='5101138799496638' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("5101138799496638"))
# print("WHEN card='6011979189197329' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("6011979189197329"))
# print("WHEN card='379154137500177' We EXPECT CardIssuer(card) to return American Express ACTUAL",CardIssuer("379154137500177"))
#else:
# print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL",CardIssuer("90123456789"))
#4916074048047310
print (getCardIssuer("123"))
print(getCardIssuer("345"))
print(getCardIssuer("567"))
#final
def CardIssuer(card):
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
issuer = "Visa"
elif digit =='5':
issuer = "MasterCard"
elif digit == '6':
issuer = "discover"
elif digit == '3':
issuer = "Amex"
else:
issuer = "issuer is invalid"
# the last line in the function should return the output
return issuer
cardIssedBy = CardIssuer('5536')
print("My card was issued by %s" % (cardIssedBy))
#final
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
#finally yay
#this is the final version
while True:
card = input("input a credit card number: ")
if card == 'quit':
break
try:
if checkLuhn(card) == True:
#does line 10 mean calling the function CardIssuer and assign it to variable issuer?
issuer = CardIssuer(card)
print("my card was issued by %s"%(issuer))
else:
print("input is invalid. Please input a valid card number")
except:
print("input is invalid. Please input a valid card number")
###Output
input a credit card number: input is invalid. Please input a valid card number
input is invalid. Please input a valid card number
input a credit card number: lalal
input is invalid. Please input a valid card number
input a credit card number: abc
input is invalid. Please input a valid card number
input a credit card number: yay
input is invalid. Please input a valid card number
input a credit card number: lol
input is invalid. Please input a valid card number
input a credit card number: wtever
input is invalid. Please input a valid card number
input a credit card number: 6011778957147528
my card was issued by discover
input a credit card number: 378325546795381
my card was issued by Amex
input a credit card number: 4024007175578920
my card was issued by Visa
input a credit card number: 5187085663497043
my card was issued by MasterCard
input a credit card number: quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = input("We need to check your credentials. Please enter your card number: ")
digit = card[0]
if (digit == '4'):
print ("The card is a Visa.")
elif (digit == '5'):
print ("The card is a Mastercard.")
elif (digit == '6'):
print ("The card is a Discover.")
elif (digit == '3'):
print ("The card is an American Express.")
else:
print ("Invalid Card. Please try another.")
###Output
We need to check your credentials. Please enter your card number: 08
Invalid Card. Please try another.
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
##card = str(input("We need to check your credentials. Please enter your card number: "))
digit = card[0]
if (digit == '4'):
issuer = "Visa"
elif (digit == '5'):
issuer = "MasterCard"
elif (digit == '6'):
issuer = "Discover"
elif (digit == '3'):
issuer = "American Express"
else:
issuer = "Invalid"
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = str(input("We need to check your credentials. Please enter your card number: "))
print ("The card issuer is ",CardIssuer(card))
###Output
We need to check your credentials. Please enter your card number: 40123456789
The card issuer is Visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
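# Quick illustration (not executed here): the same sample numbers used later in this notebook
# behave as expected, e.g. checkLuhn("5443713204330437") evaluates to True and
# checkLuhn("5111111111111111") evaluates to False.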
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
card = str(input("We need to check your credentials. Please enter your card number: "))
print ("Is the card valid? ",checkLuhn(card))
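# The explicit WHEN/EXPECT style tests asked for in the write-up would look like this
# (a minimal sketch using the sample numbers given above):
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))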
###Output
We need to check your credentials. Please enter your card number: 5443713204330437
Is the card valid? True
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'; for each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
    card = str(input("We need to check your credentials. Please enter your card number: "))
    if card == 'quit':
        break
    if checkLuhn(card) == True:
        print("The card issuer is ", CardIssuer(card))
    else:
        print("Invalid Card.")
###Output
We need to check your credentials. Please enter your card number: 5678
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Cat Problem**You want to buy 3 cats from a pet store that has 50 cats. In how many ways can you do this?**This is a classic application in the area of mathematics known as *combinatorics* which is the study of objects belonging to a finite set in accordance with certain constraints.In this example the set is 50 cats, where we select 3 of those 50 cats and the order in which we select them does not matter. We want to know how many different combinations of 3 cats can we get from the 50.This problem, written as a program would work like this:```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```Of course `` gets replaced with the answer, but we don't know how to do that....yet. Combinatorics 101In *combinatorics*:- a **permutation** defined as `P(n,k)` is the number of ordered arrangements of `n` things taken `k` at a time.- a **combination** defined as `C(n,k)` is the number of un-ordered arrangements of `n` things taken `k` at a time.In our cat case we're bringing 3 (`k`) home from the 50 (`n`) and their order doesn't matter, (after all we plan on loving them equally) so we want **combination** instead of **permutation**. An example of permutation would be if those same cats were in a beauty contest and the 3 (`k`) were to be placed in 1st, 2nd and 3rd. Formula for C(n,k) The formula for `C(n,k)` is as follows: `n! / ((n-k)! * k!) ` we will eventually write this as a user-defined Python function, but before we do, what exactly is `!` ? FactorialThe `!` is not a Python symbol, it is a mathematical symbol. It represents **factorial** defined as `n!` as the the product of the positive integer `n` and all the positive integers less than `n`. Furthermore `0! == 1`.Example: `5! == 5*4*3*2*1 == 120` We are ready to write our program!Our cat problem needs the combination formula, the combination formula needs factorial. We now know everything we need to solve the problem. We just have to assemble it all into a working program! You could solve this problem by writing a user-defined Python function for factorial, then another function for combination. Instead, we'll take a hybrid approach, using the factorial function from the Python standard library and writing a user-defined combination function. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #should be 120
math.factorial(0) # should be 1
###Output
_____no_output_____
###Markdown
Next we need to write a user-defined function for the **combination** formula. Recall: `combination(n,k)` is defined as `n! / ((n-k)! * k!)`. Use `math.factorial()` in place of `!` in the formula. For example `(n-k)!` would be `math.factorial(n-k)` in Python.
###Code
#TODO: Write code to define the combination(n,k) function here:
n=0
k=0
def combination(n,k):
combination = (math.factorial(n)) / ((math.factorial(n-k)) * (math.factorial(k)))
return int(combination)
## Test your combination function here
combination(50,3) # should be 19600
combination(4,1) # should be 4
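# Note: on Python 3.8+ the standard library also provides math.comb(n, k), so
# math.comb(50, 3) should likewise give 19600. This is just an alternative to the
# user-defined function above, not something the lab requires.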
###Output
_____no_output_____
###Markdown
Now write the entire programSample run```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```TO-Do List:``` TODO List for program1. input how many cats at pet store? save in variable n2. input how many you are willing to take home? save in variable k3. compute combination of n and k4. print results```
###Code
# TODO: Write entire program
n = int(input("How many cats at pet store?: "))
k = int(input("How many are you willing to take home?: "))
num_of_combinations = combination(n,k)
print('There are', num_of_combinations, 'different combination of', k, 'cats from the', n, 'you can choose to take home!')
###Output
How many cats at pet store?: 50
How many are you willing to take home?: 3
There are 19600 different combination of 3 cats from the 50 you can choose to take home!
###Markdown
The Cat Beauty ContestWe made mention of a cat beauty pageant, where order does matter, which would use the **permutation** formula. Do the following:1. Write a function `permutation(n,k)` in Python to implement the permutation formula2. Write a main program similar to the one you wrote above, but instead implements the cat beauty contest.``` TODO List for program1. print "welcome to the cat beauty contest"2. input how many cat contestants? save input into variable n3. how many places? save input into variable k4. compute permutation(n,k)5. print number of possible ways the contest can end.```
###Code
# TODO: Write entire program
import math
def permutation(n,k):
    permutations = math.factorial(n) / math.factorial(n-k)
    return permutations
print("Welcome to the cat beauty contest")
n = int(input("How many cat contestents? "))
k = int(input("How many places? "))
permutations = int(permutation(n,k))
print("Number of possible ways the contest can end: ", permutations)
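# Sanity check, worked by hand: P(20, 2) = 20! / 18! = 20 * 19 = 380,
# which matches the recorded run below.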
###Output
Welcome to the cat beauty contest
How many cat contestents? 20
How many places? 2
Number of possible ways the contest can end: 380
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- In the number potentially valid (as opposed to a made up series of digits)? What does the have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
import math
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know how a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = input("Enter your credit card number: ")
digit = card[0]
issuer = 0
if digit == '4':
issuer = 'Visa'
elif digit == '5':
issuer = 'Master Card'
elif digit == '6':
issuer = 'Discover'
elif digit == '3':
issuer = 'American Express'
else:
issuer = 'Invalid'
print(issuer)
###Output
Visa
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
    '''This function takes a card number (card) as input, and returns the issuer name as output'''
    if card[0] == '4':
        issuer = 'Visa'
    elif card[0] == '5':
        issuer = 'Mastercard'
    elif card[0] == '6':
        issuer = 'Discover'
    elif card[0] == '3':
        issuer = 'AmericanExpress'
    else:
        issuer = 'Invalid card'
    # return the issuer so callers (and the tests below) can use the value
    return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express", CardIssuer("3012456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
Visa
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa None
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
CardIssuer(card)
###Output
Visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True", checkLuhn('5443713204330437'))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False", checkLuhn('5111111111111111'))
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False False
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'; for each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
card = input("Enter a credit card number: ")
if card == 'quit' :
break
if checkLuhn(card) == True:
        print(CardIssuer(card))
else:
print("Invalid card.")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- IS the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
    factorial(x) -> Integral
    Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output? The output is 24.
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUntil this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
card = str(input("Enter your credit card number: "))
digit = str(card[0])
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "Mastercard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
print ("The card number you entered was from %s" % issuer)
###Output
Enter your credit card number: 4564
The card number you entered was from Visa
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = str(card[0])
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "Mastercard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer ("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer ("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer ("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Mastercard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = str(input("Enter your credit card number: "))
my_card_issuer = CardIssuer(card)
print("The card number you entered is from", my_card_issuer)
###Output
Enter your credit card number: 54651325735
The card number you entered is from Mastercard
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return true ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return false ACTUAL", checkLuhn("5111111111111111"))
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return true ACTUAL True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return false ACTUAL False
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'; for each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = str(card[0])
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "Mastercard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
# the last line in the function should return the output
return issuer
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
while True:
card = input(str("Please enter your credit card number or quit to end the program: "))
if card == "quit":
break
check_result = checkLuhn(card)
if check_result == True:
my_card_issuer = CardIssuer(card)
print("The issuer of the card is %s" % my_card_issuer)
else:
print("The card you entered was invalid")
print("The program has ended")
###Output
Please enter your credit card number or quit to end the program: quit
The program has ended
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program to use functions to solve a complex ideaWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?While eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- IS the number potentially valid (as opposed to a made up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50` which is then stored in the value `n`.When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
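# One possible answer, mirroring the completed notebooks above: 4! = 4*3*2*1 = 24
math.factorial(4)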
###Output
_____no_output_____
###Markdown
Using functions to print things awesome in JupyterUntil this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.For example this prints Hello in Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
_____no_output_____
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer "Visa"elif digit equals "5" the card issuer "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issues is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into python code
###Code
## TODO: Write your code here
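# A minimal sketch of the algorithm described above; the prompt text is an assumption,
# not specified by the lab.
card = input("Enter a credit card number: ")
digit = card[0]
if digit == '4':
    issuer = "Visa"
elif digit == '5':
    issuer = "MasterCard"
elif digit == '6':
    issuer = "Discover"
elif digit == '3':
    issuer = "American Express"
else:
    issuer = "Invalid"
print(issuer)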
###Output
_____no_output_____
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
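    # A sketch of the body, mirroring the issuer logic used in the completed notebooks above:
    digit = card[0]
    if digit == '4':
        issuer = "Visa"
    elif digit == '5':
        issuer = "MasterCard"
    elif digit == '6':
        issuer = "Discover"
    elif digit == '3':
        issuer = "American Express"
    else:
        issuer = "Invalid"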
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
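# The remaining three tests, following the same pattern as the examples in the write-up:
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))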
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
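# A minimal sketch following the algorithm above; the prompt wording is an assumption.
card = input("Enter a credit card number: ")
issuer = CardIssuer(card)
print(issuer)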
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card, will let you know if it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these ones:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
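# Tests using the sample numbers from the write-up above:
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))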
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'; for each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
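# One possible sketch of the loop described in the algorithm above
# (assumes CardIssuer() and checkLuhn() are both defined earlier in this notebook):
while True:
    card = input("Enter a credit card number (or 'quit' to stop): ")
    if card == 'quit':
        break
    if checkLuhn(card):
        print(CardIssuer(card))
    else:
        print("Invalid card")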
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions for code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling that name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.When we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
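# For example (a possible answer):
math.factorial(4)   # expected result: 24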
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (e.g. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
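# One possible sketch that follows the algorithm in the cell above:
card = input("Enter a credit card number: ")
digit = card[0]
if digit == "4":
    issuer = "Visa"
elif digit == "5":
    issuer = "MasterCard"
elif digit == "6":
    issuer = "Discover"
elif digit == "3":
    issuer = "American Express"
else:
    issuer = "Invalid"
print(issuer)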
###Output
_____no_output_____
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write, so let's move on to: Step 2: RefactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
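    # A possible sketch of the function body, mirroring the if/elif chain from the algorithm above:
    digit = card[0]
    if digit == "4":
        issuer = "Visa"
    elif digit == "5":
        issuer = "MasterCard"
    elif digit == "6":
        issuer = "Discover"
    elif digit == "3":
        issuer = "American Express"
    else:
        issuer = "Invalid Card"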
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a random sequence of numbers). It uses a verification approach called a **checksum**: a formula that determines the validity of the number. Here's the function which, given a card, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions for code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Cat Problem**You want to buy 3 cats from a pet store that has 50 cats. In how many ways can you do this?**This is a classic application in the area of mathematics known as *combinatorics*, which is the study of objects belonging to a finite set in accordance with certain constraints.In this example the set is 50 cats, where we select 3 of those 50 cats and the order in which we select them does not matter. We want to know how many different combinations of 3 cats we can get from the 50.This problem, written as a program, would work like this:```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```Of course `` gets replaced with the answer, but we don't know how to do that....yet. Combinatorics 101In *combinatorics*:- a **permutation**, defined as `P(n,k)`, is the number of ordered arrangements of `n` things taken `k` at a time.- a **combination**, defined as `C(n,k)`, is the number of un-ordered arrangements of `n` things taken `k` at a time.In our cat case we're bringing 3 (`k`) home from the 50 (`n`) and their order doesn't matter (after all, we plan on loving them equally), so we want **combination** instead of **permutation**. An example of permutation would be if those same cats were in a beauty contest and the 3 (`k`) were to be placed 1st, 2nd and 3rd. Formula for C(n,k) The formula for `C(n,k)` is as follows: `n! / ((n-k)! * k!)`. We will eventually write this as a user-defined Python function, but before we do, what exactly is `!`? FactorialThe `!` is not a Python symbol, it is a mathematical symbol. It represents **factorial**, where `n!` is defined as the product of the positive integer `n` and all the positive integers less than `n`. Furthermore `0! == 1`.Example: `5! == 5*4*3*2*1 == 120` We are ready to write our program!Our cat problem needs the combination formula, and the combination formula needs factorial. We now know everything we need to solve the problem. We just have to assemble it all into a working program! You could solve this problem by writing a user-defined Python function for factorial, then another function for combination. Instead, we'll take a hybrid approach, using the factorial function from the Python standard library and writing a user-defined combination function. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
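As a quick sanity check on the formula before we code it: `C(50,3) = 50! / (47! * 3!) = (50*49*48) / 6 = 19600`, so the finished program should report 19600 combinations.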
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #should be 120
math.factorial(0) # should be 1
###Output
_____no_output_____
###Markdown
Next we need to write a user-defined function for the **combination** formula. Recall:`combination(n,k)` is defined as `n! / ((n-k)! * k!)` use `math.factorial()` in place of `!` in the formula. For example `(n-k)!` would be `math.factorial(n-k)` in Python.
###Code
#TODO: Write code to define the combination(n,k) function here:
def combination(n,k):
    combination = math.factorial(n) / (math.factorial(n-k) * math.factorial(k))
    return combination
## Test your combination function here
result = combination(50,3)
print(result)
# should be 19600
result = combination(4,1)
print(result)
# should be 4
###Output
4.0
###Markdown
Now write the entire programSample run```How many cats are at the pet store? 50How many are you willing to take home? 3There are different combinations of 3 cats from the 50 you can choose to take home!```To-Do List:``` TODO List for program1. input how many cats at pet store? save in variable n2. input how many you are willing to take home? save in variable k3. compute combination of n and k4. print results```
###Code
# TODO: Write entire program
n = int(input("How many cats are at the petstore: "))
k = int(input("How many cats are you willing to take home: "))
def combination(n,k):
combination = math.factorial(n) / ((math.factorial(n-k) * math.factorial(k)))
return combination
result = combination(n,k)
print("There are", result, "different combinations of 3 cats from the 50 you can chose to take home!")
###Output
How many cats are at the petstore: 50
How many cats are you willing to take home: 3
There are 19600.0 different combinations of 3 cats from the 50 you can choose to take home!
###Markdown
The Cat Beauty ContestWe made mention of a cat beauty pageant, where order does matter, which would use the **permutation** formula. Do the following:1. Write a function `permutation(n,k)` in Python to implement the permutation formula2. Write a main program similar to the one you wrote above, but that instead implements the cat beauty contest.``` TODO List for program1. print "welcome to the cat beauty contest"2. input how many cat contestants? save input into variable n3. how many places? save input into variable k4. compute permutation(n,k)5. print number of possible ways the contest can end.```
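As a quick check on the permutation formula: `P(n,k) = n! / (n-k)!`, so for 9 contestants and 3 places `P(9,3) = 9*8*7 = 504` possible outcomes.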
###Code
# TODO Write function and program here
print ("Welcome to the cat beauty contest: ")
n = int(input("How many cat contenstenets: "))
k = int(input("How many places: "))
def permutation(n,k):
permutation = math.factorial(n) * math.factorial(n-k)
return permutation
result = permutation(n,k)
print("The contest can end", result, "ways.")
###Output
Welcome to the cat beauty contest:
How many cat contestants: 9
How many places: 3
The contest can end 504 ways.
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions for code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling that name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.When we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
import math
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (e.g. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card = input("Enter your credit card number: ")
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
print("%s" % (issuer))
###Output
Enter your credit card number: 5
MasterCard
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write, so let's move on to: Step 2: RefactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
print("%s" % (issuer))
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
###Output
Visa
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
MasterCard
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
Discover
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
American Express
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
Invalid
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter your credit card number: ")
CardIssuer(card)
###Output
Enter your credit card number: 457
Visa
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a random sequence of numbers). It uses a verification approach called a **checksum**: a formula that determines the validity of the number. Here's the function which, given a card, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL", checkLuhn("5443713204330437"))
print("WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL", checkLuhn("5111111111111111"))
###Output
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True ACTUAL True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ACTUAL False
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
card = input("Enter your credit card number: ")
if card == "quit":
break
    if checkLuhn(card):
        CardIssuer(card)
    else:
        print("Invalid Card")
###Output
Enter your credit card number: 5443713204330437
MasterCard
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions for code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling that name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.When we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (e.g. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card = input("What is your credit card number? ")
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issues = "American Express"
else:
print("Invalid card number")
print(issuer)
###Output
What is your credit card number? 50009
MasterCard
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write, so let's move on to: Step 2: RefactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
## TODO write code here they should be the same as lines 3-13 from the code above
def CardIssuer(card):
    '''This function takes a card number (card) as input, and returns the issuer name as output'''
    digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid Card Number"
return issuer
# the last line in the function should return the output
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card Invalid Card Number
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter your credit card number: ")
print(CardIssuer(card))
###Output
Enter your credit card number: 40
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a random sequence of numbers). It uses a verification approach called a **checksum**: a formula that determines the validity of the number. Here's the function which, given a card, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
card='5443713204330437'
checkLuhn(card)
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
def CardIssuer(card):
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid Card Number"
return issuer
card = input("Enter your credit card number: ")
while card != 'quit':
if checkLuhn(card) == True:
print("The Card issuer is ",CardIssuer(card))
else:
        print("Invalid card")
card = input("Enter your credit card number: ")
###Output
Enter your credit card number: 5443713204330437
The Card issuer is MasterCard
Enter your credit card number: quit
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions for code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling that name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.When we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (e.g. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card = input("Enter your credit card number (100% legit no scam): ")
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
print("Issuer:", issuer)
###Output
Enter your credit card number (100% legit no scam): 4
Issuer: Visa
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write, so let's move on to: Step 2: RefactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "MasterCard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here are some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer ("60123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer ("30123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL", CardIssuer ("90123456789"))
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter your credit card number (100% legit no scam): ")
issuer = CardIssuer(card)
print("Issuer:", issuer)
###Output
Enter your credit card number (100% legit no scam): 34567890
Issuer: American Express
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a random sequence of numbers). It uses a verification approach called a **checksum**: a formula that determines the validity of the number. Here's the function which, given a card, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='30569309025904' We EXPECT checkLuhn(card) to return True. Actual:", checkLuhn("30569309025904"))
print("WHEN card='51930417093408' We EXPECT checkLuhn(card) to return False. Actual:", checkLuhn("51930417093408"))
###Output
WHEN card='30569309025904' We EXPECT checkLuhn(card) to return True. Actual: True
WHEN card='51930417093408' We EXPECT checkLuhn(card) to return False. Actual: False
###Markdown
Putting it all togetherFinally, use your two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output whether it's invalid or, if valid, name the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
card = input("Enter your credit card number to get free bitcoin (or type 'quit'): ")
if card == "quit":
break
elif checkLuhn(card) == True:
issuer = CardIssuer(card)
print("Issuer:", issuer)
else:
print ("Invalid card")
###Output
Enter your credit card number to get free bitcoin (or type 'quit'): 5555555555554444
Issuer: MasterCard
Enter your credit card number to get free bitcoin (or type 'quit'): 4111111111111111
Issuer: Visa
Enter your credit card number to get free bitcoin (or type 'quit'): 643278504513245738
Invalid card
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it a Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.Once we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card = input("Input your credit card number: ")
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "Mastercard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
print(issuer)
###Output
Input your credit card number: 67890
Discover
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out, we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == "4":
issuer = "Visa"
elif digit == "5":
issuer = "Mastercard"
elif digit == "6":
issuer = "Discover"
elif digit == "3":
issuer = "American Express"
else:
issuer = "Invalid"
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card = '60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card = '30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card = '90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Mastercard
WHEN card = '60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card = '30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card = '90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL Invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("Enter Credit Card Number:")
print(CardIssuer(card))
###Output
Enter Credit Card Number:57890
Mastercard
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just an arbitrary sequence of digits). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card number, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
print("WHEN card='5204553091822610' We EXPECT checkLuhn(card) to return True, ACTUAL", checkLuhn('5204553091822610'))
print("WHEN card='373807871437759' We EXPECT checkLuhn(card) to return False, ACTUAL", checkLuhn('9745314937539133943'))
###Output
WHEN card='5204553091822610' We EXPECT checkLuhn(card) to return True, ACTUAL True
WHEN card='9745314937539133943' We EXPECT checkLuhn(card) to return False, ACTUAL False
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output either that the card is invalid or, if it is valid, the name of the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
count = 0
while True:
card = input("enter card number")
count = count + 1
if card == 'quit':
print("quitting program")
break
if checkLuhn(card) == True:
print(CardIssuer(card))
continue
else:
print("invalid card")
continue
###Output
enter card number373807871437759
American Express
enter card number4929184940118112
Visa
enter card number527995684988303
invalid card
enter card number5279956849883034
Mastercard
enter card number6011958172822440
Discover
enter card numberquit
quitting program
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it a Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.Once we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
###Output
_____no_output_____
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out, we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
_____no_output_____
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just an arbitrary sequence of digits). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card number, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output either that the card is invalid or, if it is valid, the name of the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it a Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.Once we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
Help on built-in function factorial in module math:
factorial(...)
factorial(x) -> Integral
Find x!. Raise a ValueError if x is negative or non-integral.
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4)
# the answer is 24
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
_____no_output_____
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card= input("I will check your card, enter your credentials: ")
digit = card[0]
if (digit=='4'):
print("Hello Visa user")
elif (digit=='5'):
print("Master of the Mastercard")
elif (digit=='6'):
print ("We've discovered a Discover Card")
elif (digit=='3'):
print("Manifest destiny! It's an American Express")
else:
display(HTML("<h1>Invalid Input!</h1>"))
###Output
I will check your card, enter your credentials: 345678901
Manifest destiny! It's an American Express
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out, we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if (digit=='4'):
issuer=' Visa'
elif (digit=='5'):
issuer=' Master'
elif (digit=='6'):
issuer=' Discover'
elif (digit=='3'):
issuer=' American Express'
else:
issuer=' invalid'
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card", CardIssuer("90123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Master
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input("I will check your card, enter your credentials: ")
issuer = CardIssuer(card)
print("Issuer:" + issuer)
###Output
_____no_output_____
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just an arbitrary sequence of digits). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card number, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
card=input("Enter a card number to check for validity: ")
print ("Here are the results: ",checkLuhn(card))
###Output
Enter a card number to check for validity: 31234567890
Here are the results: False
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output either that the card is invalid or, if it is valid, the name of the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
def checkLuhn(card):
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
def CardIssuer(card):
digit = card[0]
if (digit=='4'):
issuer='Visa'
elif (digit=='5'):
issuer='Master'
elif (digit=='6'):
issuer='Discover'
elif (digit=='3'):
issuer='American_Express'
else:
issuer='invalid'
return issuer
while True:
    card = input("I will check your card, enter your credentials: ")
    if card == 'quit':
        break
    if checkLuhn(card) == False:
        print("Don't give me a fake card!")
    else:
        print("You wield the ", CardIssuer(card), " card")
###Output
I will check your card, enter your credentials: 4539100997421904
You wield the Visa card
I will check your card, enter your credentials: 1234567890
Don't give me a fake card!
###Markdown
In-Class Coding Lab: FunctionsThe goals of this lab are to help you to understand:- How to use Python's built-in functions in the standard library.- How to write user-defined functions- The benefits of user-defined functions to code reuse and simplicity.- How to create a program that uses functions to solve a complex problemWe will demonstrate these through the following example: The Credit Card ProblemIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?**Example:** Is `5300023581452982` a valid credit card number?Is it a Visa? MasterCard? Discover? Or American Express?While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place. The most common is simply trying to catch an honest key-entry mistake made by your site visitor.So there are two things we'd like to figure out, for any "potential" card number:- Who is the issuing network? Visa, MasterCard, Discover or American Express.- Is the number potentially valid (as opposed to a made-up series of digits)? What does this have to do with functions?If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. **Example:** In `n = int("50")` the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.Once we create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file with a collection of functions in it. Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. Built-In FunctionsLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
###Code
import math
dir(math)
###Output
_____no_output_____
###Markdown
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
###Code
help(math.factorial)
###Output
_____no_output_____
###Markdown
It says it's a built-in function, and requires an integer value (which it refers to as x, but that name is arbitrary) as an argument. Let's call the function and see if it works:
###Code
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
###Output
_____no_output_____
###Markdown
Using functions to print awesome things in JupyterUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from a string of [HTML Markup](https://www.w3schools.com/html/html_intro.asp).For example, this prints Hello as a Heading 1.
###Code
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
###Output
Exciting:
###Markdown
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively. Execute this code:
###Code
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
###Output
_____no_output_____
###Markdown
Now let's use these two functions in a familiar program!
###Code
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
###Output
###Markdown
Let's get back to credit cards....Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:- Who is the issuing network? Visa, MasterCard, Discover or American Express.This problem can be solved by looking at the first digit of the card number: - "4" ==> "Visa" - "5" ==> "MasterCard" - "6" ==> "Discover" - "3" ==> "American Express" So for card number `5300023581452982` the issuer is "MasterCard".It should be easy to write a program to solve this problem. Here's the algorithm:```input credit card number into variable cardget the first digit of the card number (eg. digit = card[0])if digit equals "4" the card issuer is "Visa"elif digit equals "5" the card issuer is "MasterCard"elif digit equals "6" the card issuer is "Discover"elif digit equals "3" the card issuer is "American Express"else the issuer is "Invalid" print issuer``` Now You Try ItTurn the algorithm into Python code
###Code
## TODO: Write your code here
card = input('Enter your credit card number: ')
digit = card[0]
if digit == '3':
issuer = 'American Express'
elif digit == '4':
issuer = 'Visa'
elif digit == '5':
issuer = 'Mastercard'
elif digit == '6':
issuer = 'Discover'
else:
issuer = 'invalid'
print(issuer)
###Output
Enter your credit card number: 5
Mastercard
###Markdown
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid Card" case. Introducing the Write - Refactor - Test - Rewrite approachIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:1. Write the code2. Refactor (change the code around) to use a function3. Test the function by calling it4. Rewrite the original code to use the new function.We already did step 1: Write so let's move on to: Step 2: refactorLet's strip the logic out of the above code to accomplish the task of the function:- Send into the function as input a credit card number as a `str`- Return back from the function as output the issuer of the card as a `str`To help you out, we've written the function stub for you; all you need to do is write the function body code.
###Code
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '3':
issuer = 'American Express'
elif digit == '4':
issuer = 'Visa'
elif digit == '5':
issuer = 'Mastercard'
elif digit == '6':
issuer = 'Discover'
else:
issuer = 'invalid'
# the last line in the function should return the output
return issuer
###Output
_____no_output_____
###Markdown
Step 3: TestYou wrote the function, but how do you know it works? The short answer is unless you test it you're guessing. Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!Here's some examples:```WHEN card='40123456789' We EXPECT CardIssuer(card) to return VisaWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCardWHEN card='60123456789' We EXPECT CardIssuer(card) to return DiscoverWHEN card='30123456789' We EXPECT CardIssuer(card) to return American ExpressWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card``` Now you Try it!Write the tests based on the examples:
###Code
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL", CardIssuer("90123456789"))
###Output
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL Mastercard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card ACTUAL invalid
###Markdown
Step 4: RewriteThe final step is to re-write the original program, but use the function instead. The algorithm becomes```input credit card number into variable cardcall the CardIssuer function with card as input, issuer as outputprint issuer``` Now You Try It!
###Code
# TODO Re-write the program here, calling our function.
card = input('Enter your credit card number: ')
issuer = CardIssuer(card)
print(("your card issuer is"), issuer)
###Output
Enter your credit card number: 6666
your card issuer is Discover
###Markdown
Functions are abstractions. Abstractions are good.Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just an arbitrary sequence of digits). It uses a verification approach called a **checksum**, as it uses a formula to figure out the validity. Here's the function which, given a card number, will let you know whether it passes the Luhn check:
###Code
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
###Output
_____no_output_____
###Markdown
Is that a credit card number or the ramblings of a madman?In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:```WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return TrueWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False ```
###Code
#TODO Write your two tests here
###Output
_____no_output_____
###Markdown
Putting it all togetherFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output either that the card is invalid or, if it is valid, the name of the issuer.Here's the Algorithm:```loop input a credit card number if card = 'quit' stop loop if card passes luhn check get issuer print issuer else print invalid card``` Now You Try It
###Code
## TODO Write code here
while True:
card = input('Enter your credit card number: ')
if card == 'quit':
break
if checkLuhn(card) == True:
issuer = CardIssuer(card)
print('Your issuer is', issuer)
else:
print('Invalid card')
###Output
Enter your credit card number: 4444
Invalid card
|
2020/1121/hotel_dataWebScraping/WebScraping_mini.ipynb | ###Markdown
Getting hotel key data
###Code
from bs4 import BeautifulSoup
from selenium import webdriver
from tqdm import tqdm_notebook
import time
global options
options = webdriver.ChromeOptions()
options.add_argument('headless')
options.add_argument('window-size=1920x1080')
options.add_argument('disable-gpu')
options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36")
# local_url: maps each region name to its Naver Hotel list URL
local_url={
'서울':'https://hotel.naver.com/hotels/list?destination=place:Seoul',
'부산':'https://hotel.naver.com/hotels/list?destination=place:Busan_Province',
'속초':'https://hotel.naver.com/hotels/list?destination=place:Sokcho',
'경주':'https://hotel.naver.com/hotels/list?destination=place:Gyeongju',
'강릉':'https://hotel.naver.com/hotels/list?destination=place:Gangneung',
'여수':'https://hotel.naver.com/hotels/list?destination=place:Yeosu',
'수원':'https://hotel.naver.com/hotels/list?destination=place:Suwon',
'제주':'https://hotel.naver.com/hotels/list?destination=place:Jeju_Province',
'인천':'https://hotel.naver.com/hotels/list?destination=place:Incheon_Metropolitan_City',
'대구':'https://hotel.naver.com/hotels/list?destination=place:Daegu_Metropolitan_City',
'전주':'https://hotel.naver.com/hotels/list?destination=place:Jeonju',
'광주':'https://hotel.naver.com/hotels/list?destination=place:Gwangju_Metropolitan_City',
'울산':'https://hotel.naver.com/hotels/list?destination=place:Ulsan_Metropolitan_City',
'평창':'https://hotel.naver.com/hotels/list?destination=place:Pyeongchang',
'군산':'https://hotel.naver.com/hotels/list?destination=place:Gunsan',
'양양':'https://hotel.naver.com/hotels/list?destination=place:Yangyang',
'춘천':'https://hotel.naver.com/hotels/list?destination=place:Chuncheon',
'대전':'https://hotel.naver.com/hotels/list?destination=place:Daejeon_Metropolitan_City',
'천안':'https://hotel.naver.com/hotels/list?destination=place:Cheonan',
'세종':'https://hotel.naver.com/hotels/list?destination=place:Sejong',
'청주':'https://hotel.naver.com/hotels/list?destination=place:Cheongju'
}
def get_hotel_data(url):
#driverOpen
driver= webdriver.Chrome('chromedriver.exe', options=options)
driver.get(url)
req= driver.page_source
soup=BeautifulSoup(req,'html.parser')
result=[]
    driver.implicitly_wait(5) # wait up to 5 seconds
detail_keys=soup.select('ul.lst_hotel > li.ng-scope')
result.extend([ key['id'] for key in detail_keys])
    # click the 'next' button
    driver.find_element_by_xpath('/html/body/div/div/div[1]/div[2]/div[6]/div[2]/a[2]').click()
    # close the browser window
driver.quit()
return result
hotel_keys=[]
for local in tqdm_notebook(local_url.keys()):
hotel_keys.extend(get_hotel_data(local_url[local]))
hotel_keys
len(hotel_keys)
###Output
_____no_output_____
###Markdown
Getting hotel data
###Code
from bs4 import BeautifulSoup
from selenium import webdriver
import getHotelKeys
import time
global local, hotels
localCode = {
'강원도': 1,
'경기도': 2,
'경상남도': 3,
'경남': 3,
'경상북도': 4,
'경북': 4,
'광주': 5,
'광주광역시': 5,
'대구': 6,
'대전': 7,
'부산': 8,
'부산광역시': 8,
'서울': 9,
'서울특별시': 9,
'세종': 10,
'세종특별시': 10,
'세종특별자치시': 10,
'인천': 11,
'인천광역시': 11,
'울산': 12,
'울산광역시': 12,
'전라남도': 13,
'전남': 13,
'전라북도': 14,
'전북': 14,
'제주도': 15,
'제주': 15,
'제주특별자치도': 15,
'충청남도': 16,
'충남': 16,
'충청북도': 17,
'충북': 17
}
def get_hotel_info(key):
try:
one_hotel_info = {}
driver = webdriver.Chrome('chromedriver.exe', options=options)
        # hotel detail page URL
url = 'https://hotel.naver.com/hotels/item?hotelId='+key
driver.get(url)
        time.sleep(1) # wait 1 second for the page to load
req = driver.page_source
soup = BeautifulSoup(req, 'html.parser')
        # hotel name
hotel_name = soup.select_one('div.info_wrap > strong.hotel_name').text
one_hotel_info['BO_TITLE'] = hotel_name
        # hotel star rating
hotel_rank = soup.find('span', class_='grade').text
if hotel_rank in ['1성급', '2성급', '3성급', '4성급', '5성급']:
one_hotel_info['HOTEL_RANK']=int(hotel_rank[0])
else:
one_hotel_info['HOTEL_RANK']=None
        # hotel address
hotel_addr_list= [addr for addr in soup.select_one('p.addr').text.split(', ')]
one_hotel_info['HOTEL_ADDRESS']=hotel_addr_list[0]
one_hotel_info['HOTEL_LOCAL_CODE']=localCode[hotel_addr_list[-2]]
        # hotel description
        driver.find_element_by_class_name('more').click() # click the 'more' button
        time.sleep(1) # wait 1 second for the expanded description to load
hotel_content = soup.select_one('div.desc_wrap.ng-scope > p.txt.ng-binding')
one_hotel_info['BO_CONTENT'] = hotel_content.text
        # hotel options (amenities)
hotel_options = [option.text for option in soup.select('i.feature')]
one_hotel_info['HOTEL_REAL_OPTIONS'] = hotel_options
driver.quit()
return one_hotel_info
except Exception:
driver.quit()
pass
hotels=[]
for key in tqdm_notebook( hotel_keys):
hotels.append(get_hotel_info(key))
hotels
len(hotels)
###Output
_____no_output_____
###Markdown
Insert 10 of the records into the Oracle table.
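Below is a minimal, hypothetical sketch of that insert step, assuming the `cx_Oracle` driver and a `HOTEL_BOARD` table whose columns match the keys returned by `get_hotel_info()`; the connection string and the table/column names are placeholder assumptions, not actual project settings.
###Code
# Hypothetical sketch: insert the first 10 scraped records into an assumed HOTEL_BOARD table.
import cx_Oracle

conn = cx_Oracle.connect('scott', 'tiger', 'localhost:1521/xe')  # placeholder credentials/DSN
cursor = conn.cursor()
sql = """INSERT INTO HOTEL_BOARD
         (BO_TITLE, HOTEL_RANK, HOTEL_ADDRESS, HOTEL_LOCAL_CODE, BO_CONTENT)
         VALUES (:1, :2, :3, :4, :5)"""
inserted = 0
for hotel in hotels:
    if hotel is None:  # get_hotel_info() returns None when scraping a hotel fails
        continue
    cursor.execute(sql, (hotel['BO_TITLE'], hotel['HOTEL_RANK'], hotel['HOTEL_ADDRESS'],
                         hotel['HOTEL_LOCAL_CODE'], hotel['BO_CONTENT']))
    inserted += 1
    if inserted == 10:  # the cell above asks for 10 records
        break
conn.commit()
cursor.close()
conn.close()
###Output
_____no_output_____
###Markdown
The cells below spot-check a few of the scraped records against their source pages.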
###Code
hotels[3]
hotel_keys[3]
hotels[44]
hotel_keys[44]
###Output
_____no_output_____
###Markdown
https://hotel.naver.com/hotels/item?hotelId=hotel:Marinabay_Sokcho
###Code
hotels[55]
hotel_keys[55]
###Output
_____no_output_____
###Markdown
https://hotel.naver.com/hotels/item?hotelId=hotel:Hyundai_Soo_Resort_Sokcho
###Code
hotels[120]
hotel_keys[120]
###Output
_____no_output_____
###Markdown
https://hotel.naver.com/hotels/item?hotelId=hotel:Ramada_Plaza_Suwon_Hotel
###Code
hotels[221]
hotel_keys[221]
###Output
_____no_output_____ |
examples/notebooks/WWW/censored_data.ipynb | ###Markdown
Fitting censored dataExperimental measurements are sometimes censored such that we only know partial information about a particular data point. For example, in measuring the lifespan of mice, a portion of them might live through the duration of the study, in which case we only know the lower bound.One of the ways we can deal with this is to use Maximum Likelihood Estimation ([MLE](http://en.wikipedia.org/wiki/Maximum_likelihood)). However, censoring often makes analytical solutions difficult even for well-known distributions.We can overcome this challenge by converting the MLE into a convex optimization problem and solving it using [CVXPY](http://www.cvxpy.org/en/latest/).This example is adapted from a homework problem from Boyd's [CVX 101: Convex Optimization Course](https://class.stanford.edu/courses/Engineering/CVX101/Winter2014/info). SetupWe will use similar notation here. Suppose we have a linear model:$$ y^{(i)} = c^Tx^{(i)} +\epsilon^{(i)} $$where $y^{(i)} \in \mathbf{R}$, $c \in \mathbf{R}^n$, $x^{(i)} \in \mathbf{R}^n$, and $\epsilon^{(i)}$ is the error and has a normal distribution $N(0, \sigma^2)$ for $ i = 1,\ldots,K$.Then the MLE estimator $c$ is the vector that minimizes the sum of squares of the errors $\epsilon^{(i)}$, namely:$$\begin{array}{ll} \underset{c}{\mbox{minimize}} & \sum_{i=1}^K (y^{(i)} - c^T x^{(i)})^2\end{array}$$In the case of right censored data, only $M$ observations are fully observed and all that is known for the remaining observations is that $y^{(i)} \geq D$ for $i=\mbox{M+1},\ldots,K$ and some constant $D$.Now let's see how this would work in practice. Data Generation
###Code
import numpy as np
n = 30 # number of variables
M = 50 # number of censored observations
K = 200 # total number of observations
np.random.seed(n*M*K)
X = np.random.randn(K*n).reshape(K, n)
c_true = np.random.rand(n)
# generating the y variable
y = X.dot(c_true) + .3*np.sqrt(n)*np.random.randn(K)
# ordering them based on y
order = np.argsort(y)
y_ordered = y[order]
X_ordered = X[order,:]
#finding boundary
D = (y_ordered[M-1] + y_ordered[M])/2.
# applying censoring
y_censored = np.concatenate((y_ordered[:M], np.ones(K-M)*D))
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
def plot_fit(fit, fit_label):
plt.figure(figsize=(10,6))
plt.grid()
plt.plot(y_censored, 'bo', label = 'censored data')
plt.plot(y_ordered, 'co', label = 'uncensored data')
plt.plot(fit, 'ro', label=fit_label)
plt.ylabel('y')
plt.legend(loc=0)
plt.xlabel('observations');
###Output
_____no_output_____
###Markdown
Regular OLS. Let's see what the OLS result looks like. We'll use the `np.linalg.lstsq` function to solve for our coefficients.
###Code
c_ols = np.linalg.lstsq(X_ordered, y_censored, rcond=None)[0]
fit_ols = X_ordered.dot(c_ols)
plot_fit(fit_ols, 'OLS fit')
###Output
_____no_output_____
###Markdown
We can see that we are systematically overestimating low values of $y$ and vice versa (red vs. cyan). This is caused by our use of censored (blue) observations, which are exerting a lot of leverage and pulling down the trendline to reduce the error between the red and blue points. OLS using uncensored data. A simple way to deal with this while maintaining analytical tractability is to simply ignore all censored observations. $$\begin{array}{ll} \underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2\end{array}$$Given that our $M$ is much smaller than $K$, we are throwing away the majority of the dataset in order to accomplish this; let's see how this new regression does.
###Code
c_ols_uncensored = np.linalg.lstsq(X_ordered[:M], y_censored[:M], rcond=None)[0]
fit_ols_uncensored = X_ordered.dot(c_ols_uncensored)
plot_fit(fit_ols_uncensored, 'OLS fit with uncensored data only')
bad_predictions = (fit_ols_uncensored<=D) & (np.arange(K)>=M)
plt.plot(np.arange(K)[bad_predictions], fit_ols_uncensored[bad_predictions], color='orange', marker='o', lw=0);
###Output
_____no_output_____
###Markdown
We can see that the fit for the uncensored portion is now vastly improved. Even the fit for the censored data is now relatively unbiased, i.e. the fitted values (red points) are now centered around the uncensored observations (cyan points). The one glaring issue with this arrangement is that we are now predicting many observations to be _below_ $D$ (orange) even though we are well aware that this is not the case. Let's try to fix this. Using constraints to take censored data into account. Instead of throwing away all censored observations, let's leverage these observations to enforce the additional information that we know, namely that $y$ is bounded from below. We can do this by setting additional constraints:$$\begin{array}{ll} \underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2 \\ \mbox{subject to} & c^T x^{(i)} \geq D\\ & \mbox{for } i=\mbox{M+1},\ldots,K\end{array}$$
###Code
import cvxpy as cp
X_uncensored = X_ordered[:M, :]
c = cp.Variable(shape=n)
objective = cp.Minimize(cp.sum_squares(X_uncensored @ c - y_ordered[:M]))
constraints = [X_ordered[M:,:] @ c >= D]
prob = cp.Problem(objective, constraints)
result = prob.solve()
c_cvx = np.array(c.value).flatten()
fit_cvx = X_ordered.dot(c_cvx)
plot_fit(fit_cvx, 'CVX fit')
###Output
_____no_output_____
###Markdown
Qualitatively, this already looks better than before as it no longer predicts inconsistent values with respect to the censored portion of the data. But does it do a good job of actually finding coefficients $c$ that are close to our original data?We'll use a simple Euclidean distance $\|c_\mbox{true} - c\|_2$ to compare:
###Code
print("norm(c_true - c_cvx): {:.2f}".format(np.linalg.norm((c_true - c_cvx))))
print("norm(c_true - c_ols_uncensored): {:.2f}".format(np.linalg.norm((c_true - c_ols_uncensored))))
###Output
norm(c_true - c_cvx): 1.49
norm(c_true - c_ols_uncensored): 2.23
|
tutorials/Resnet_TorchVision_Interpret.ipynb | ###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
1- Loading the model and the dataset Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
2- Gradient-based attribution Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
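Concretely, for an input $x$, a baseline $x'$ (here the all-black image) and the model's score $F$ for the target class, the attribution assigned to feature $i$ is$$\mathrm{IG}_i(x) = (x_i - x'_i)\int_0^1 \frac{\partial F\big(x' + \alpha\,(x - x')\big)}{\partial x_i}\,d\alpha,$$which is approximated numerically from `n_steps` points along the straight-line path.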
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
###Code
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
_____no_output_____
###Markdown
Let us compute attributions using Integrated Gradients and smoothen them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
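Roughly speaking, the resulting attribution for feature $i$ is the expected-gradients estimate$$\phi_i \approx \mathbb{E}_{x' \sim \mathcal{D},\,\alpha \sim U(0,1)}\Big[(x_i - x'_i)\,\frac{\partial F\big(x' + \alpha\,(x - x')\big)}{\partial x_i}\Big],$$estimated here from `n_samples=50` random draws, with a small amount of Gaussian noise (`stdevs`) added to the sampled inputs.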
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
3- Occlusion-based attribution Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes. We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (as defined via `strides`). At each location, we occlude the image with a baseline value of 0 which corresponds to a gray patch (defined via `baselines`). **Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
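To get a feel for the cost: with the 224x224 input used here, a 15x15 window and a stride of 8 give roughly $(224-15)/8 + 1 \approx 27$ window positions per spatial dimension (the exact count depends on how the image borders are handled), i.e. on the order of $27 \times 27 \approx 730$ occluded forward passes through the network.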
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target=pred_label_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
###Output
_____no_output_____
###Markdown
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
###Code
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class.We can verify this further by occluding the image using a larger sliding window:
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
1- Loading the model and the dataset Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
2- Gradient-based attribution Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
###Code
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
_____no_output_____
###Markdown
Let us compute attributions using Integrated Gradients and smoothen them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (nt_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `nt_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `nt_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, nt_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
3- Occlusion-based attribution Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes. We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (as defined via `strides`). At each location, we occlude the image with a baseline value of 0 which corresponds to a gray patch (defined via `baselines`). **Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target=pred_label_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
###Output
_____no_output_____
###Markdown
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
###Code
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class.We can verify this further by occluding the image using a larger sliding window:
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
1- Loading the model and the dataset Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
2- Gradient-based attribution Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
###Code
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
_____no_output_____
###Markdown
Let us compute attributions using Integrated Gradients and smoothen them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
3- Occlusion-based attribution Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes. We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (as defined via `strides`). At each location, we occlude the image with a baseline value of 0 which corresponds to a gray patch (defined via `baselines`). **Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target=pred_label_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
###Output
_____no_output_____
###Markdown
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
###Code
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class. We can verify this further by occluding the image using a larger sliding window:
###Code
occlusion = Occlusion(model)
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
Model interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are Integrated Gradients (w/ and w/o noise tunnel) and GradientShap. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Saliency
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
###Output
_____no_output_____
###Markdown
This function is used to visualize the image and corresponding attributions by overlaying the latter on the image. Computes attributions using Integrated Gradients and visualizes them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Computes attributions using Integrated Gradients and smoothens them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
GradientShap is a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 255])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
1- Loading the model and the dataset Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
2- Gradient-based attribution Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
###Code
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
_____no_output_____
###Markdown
Let us compute attributions using Integrated Gradients and smoothen them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
3- Occlusion-based attribution Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes. We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (as defined via `strides`). At each location, we occlude the image with a baseline value of 0 which corresponds to a gray patch (defined via `baselines`). **Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target=pred_label_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
###Output
_____no_output_____
###Markdown
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
###Code
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class. We can verify this further by occluding the image using a larger sliding window:
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are Integrated Gradients (w/ and w/o noise tunnel) and GradientShap. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Saliency
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
###Output
_____no_output_____
###Markdown
This function is used to visualize the image and corresponding attributions by overlaying the latter on the image. Computes attributions using Integrated Gradients and visualizes them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
Predicted: goose ( 0.4569324851036072 )
###Markdown
Computes attributions using Integrated Gradients and smoothens them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
GradientShap is a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 255])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Model Interpretation for Pretrained ResNet Model This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image. The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel allows smoothing of the attributions after adding gaussian noise to each input sample. **Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
###Code
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
1- Loading the model and the dataset Loads pretrained Resnet model and sets it to eval mode
###Code
model = models.resnet18(pretrained=True)
model = model.eval()
###Output
_____no_output_____
###Markdown
Downloads the list of classes/labels for ImageNet dataset and reads them into the memory
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
###Output
_____no_output_____
###Markdown
Defines transformers and normalizing functions for the image.It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
###Code
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
# specify the all-zero baseline in image space
baselines = torch.zeros_like(transformed_img)
baselines = transform_normalize(baselines).unsqueeze(0)
###Output
_____no_output_____
###Markdown
Predict the class of the input image
###Code
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
###Output
Predicted: goose ( 0.45693349838256836 )
###Markdown
2- Gradient-based attribution Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
###Code
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, baselines = baselines, target=pred_label_idx, n_steps=200)
###Output
Predicted: goose ( 0.45693349838256836 )
###Markdown
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
###Code
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
###Output
_____no_output_____
###Markdown
Let us compute attributions using Integrated Gradients and smoothen them across multiple images generated by a noise tunnel. The latter adds gaussian noise with a std equal to one, 10 times (n_samples=10) to the input. Ultimately, the noise tunnel smoothens the attributions across `n_samples` noisy samples using the `smoothgrad_sq` technique. `smoothgrad_sq` represents the mean of the squared attributions across `n_samples` samples.
###Code
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input which was chosen randomly between the input and a baseline. The baseline is also chosen randomly from given baseline distribution.
###Code
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
###Output
_____no_output_____
###Markdown
3- Occlusion-based attribution Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes. We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (as defined via `strides`). At each location, we occlude the image with a baseline value of 0 which corresponds to a gray patch (defined via `baselines`). **Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target = pred_label_idx,
sliding_window_shapes = (3,15, 15),
baselines = baselines)
###Output
_____no_output_____
###Markdown
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
###Code
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____
###Markdown
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class.We can verify this further by occluding the image using a larger sliding window:
###Code
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
###Output
_____no_output_____ |
chapter2/Chapter 2.ipynb | ###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoaders. We'll use the built-in `torchvision.datasets.ImageFolder` dataset to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except Exception:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:
* Resize to 64x64
* Convert to tensor
* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNet. SimpleNet is a very simple combination of three Linear layers and ReLU activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
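(The input dimension of 12288 used in the first Linear layer comes from flattening the 64x64 RGB tensors produced by the transforms above: 64 x 64 x 3 = 12288.)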
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets).view(-1)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
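###Markdown
A quick way to convince yourself the round-trip worked is to compare the reloaded model's output with the original on the same input. The cell below is a sketch; the random input and the `reloaded` name are just for illustration.
###Code
# Sketch: the reloaded parameters should reproduce the original model's logits exactly.
reloaded = SimpleNet()
reloaded.load_state_dict(torch.load("/tmp/simplenet"))
reloaded.eval()
simplenet.eval()
with torch.no_grad():
    x = torch.rand(1, 12288)                           # random flattened 3x64x64 input on CPU
    print(torch.allclose(reloaded(x), simplenet(x)))   # expected: True
###Output
_____no_output_____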
###Markdown
Data Collection
###Code
# download.py
#
# Adapted by jeandersonbc for a more Google Colaboratory friendly code
import os
import sys
import urllib3
from urllib.parse import urlparse
import pandas as pd
import itertools
import shutil
from urllib3.util import Retry
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
classes = ["cat", "fish"]
set_types = ["train", "test", "val"]
def download_image(url, klass, data_type):
basename = os.path.basename(urlparse(url).path)
filename = "{}/{}/{}".format(data_type, klass, basename)
if not os.path.exists(filename):
fetch(url, filename)
def download_csv(url, csvname):
if not os.path.exists(csvname):
fetch(url, csvname)
def fetch(url, filename):
try:
http = urllib3.PoolManager(
retries=Retry(connect=1, read=1, redirect=2))
with http.request("GET", url, preload_content=False) as resp, open(
filename, "wb"
) as out_file:
if resp.status == 200:
shutil.copyfileobj(resp, out_file)
else:
print("Error downloading {}".format(url))
resp.release_conn()
except:
print("Error downloading {}".format(url))
def main():
csv_url = "https://raw.githubusercontent.com/falloutdurham/pytorchupandrunning/master/chapter2/images.csv"
download_csv(csv_url, "images.csv")
if not os.path.exists("images.csv"):
raise Exception("Error: can't find images.csv!")
# get args and create output directory
imagesDF = pd.read_csv("images.csv")
for set_type, klass in list(itertools.product(set_types, classes)):
path = "./{}/{}".format(set_type, klass)
if not os.path.exists(path):
print("Creating directory {}".format(path))
os.makedirs(path)
size = len(imagesDF)
print("Downloading {} images".format(size))
counter = 1
for url, klass, data_type in zip(imagesDF["url"], imagesDF["class"], imagesDF["type"]):
print("progress {}/{}".format(counter, size))
download_image(url, klass, data_type)
counter += 1
main()
###Output
_____no_output_____
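###Markdown
After the script finishes, a short count of how many files landed in each split/class folder can save debugging time later. This is a sketch that reuses the `set_types` and `classes` lists defined above.
###Code
# Sketch: count the downloaded files per split and class.
for set_type in set_types:
    for klass in classes:
        folder = "./{}/{}".format(set_type, klass)
        count = len(os.listdir(folder)) if os.path.exists(folder) else 0
        print("{}/{}: {} files".format(set_type, klass, count))
###Output
_____no_output_____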
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
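###Markdown
It is worth checking what `ImageFolder` actually discovered before training. The cell below is a sketch; the batch shape shown in the comment assumes the training set holds at least 64 readable images.
###Code
# Sketch: inspect the datasets and one batch from the training loader.
print(train_data.classes, train_data.class_to_idx)   # e.g. ['cat', 'fish'] {'cat': 0, 'fish': 1}
print(len(train_data), len(val_data), len(test_data))
images, targets = next(iter(train_data_loader))
print(images.shape, targets.shape)                   # e.g. torch.Size([64, 3, 64, 64]) torch.Size([64])
###Output
_____no_output_____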
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
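###Markdown
The 12288 in `fc1` is simply the flattened 3x64x64 input (3 * 64 * 64 = 12288). A short sketch to confirm that and to see how many parameters the model has:
###Code
# Sketch: where 12288 comes from, and SimpleNet's total parameter count.
print(3 * 64 * 64)                                        # 12288
print(sum(p.numel() for p in simplenet.parameters()))     # dominated by fc1 (12288*84 weights + 84 biases)
###Output
_____no_output_____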
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets).view(-1)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
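###Markdown
Rather than printing only the winning label, it can be useful to see the probability assigned to each class. This sketch reuses `img`, `labels` and `simplenet` from the cell above.
###Code
# Sketch: show the per-class probabilities, not just the argmax.
with torch.no_grad():
    probs = F.softmax(simplenet(img), dim=1)[0]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
###Output
_____no_output_____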
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,
transform=img_transforms,
is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,
transform=img_transforms,
is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,
transform=img_transforms,
is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
# !pip install torchsummary
from torchsummary import summary
summary(simplenet, (12288,))   # the model expects a flattened 3x64x64 input, i.e. 12288 values
print(simplenet)
###Output
SimpleNet(
(fc1): Linear(in_features=12288, out_features=84, bias=True)
(fc2): Linear(in_features=84, out_features=50, bias=True)
(fc3): Linear(in_features=50, out_features=2, bias=True)
)
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print("Device using: ", device)
simplenet.to(device)
###Output
Device using: cpu
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}' \
.format(epoch, training_loss, valid_loss, num_correct / num_examples))
train(simplenet,
optimizer,
torch.nn.CrossEntropyLoss(),
train_data_loader,
val_data_loader,
epochs=5,
device=device)
###Output
Epoch: 1, Training Loss: 2.46, Validation Loss: 6.40, accuracy = 0.35
Epoch: 2, Training Loss: 3.54, Validation Loss: 1.77, accuracy = 0.65
Epoch: 3, Training Loss: 1.21, Validation Loss: 0.66, accuracy = 0.70
Epoch: 4, Training Loss: 0.54, Validation Loss: 0.65, accuracy = 0.71
Epoch: 5, Training Loss: 0.43, Validation Loss: 0.72, accuracy = 0.72
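###Markdown
The `test_data_loader` set up earlier is never actually used in this chapter. The cell below is an illustrative sketch (the `evaluate` helper is ours, not the book's) of how the trained model could be scored on the held-out test set.
###Code
# Sketch: score the trained model on the held-out test set.
def evaluate(model, loader, device="cpu"):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=1)   # softmax is not needed just to pick the largest logit
            correct += (preds == targets).sum().item()
            total += targets.shape[0]
    return correct / total

print("test accuracy: {:.2f}".format(evaluate(simplenet, test_data_loader, device)))
###Output
_____no_output_____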
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
fish
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
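###Markdown
One detail to be aware of: these loaders keep `ImageFolder`'s sorted order, so batches are drawn class by class (all cats, then all fish). For training it usually helps to shuffle; the cell below is a sketch of that variant (the `shuffled_train_loader` name is ours, and the chapter's own code does not do this).
###Code
# Sketch: a shuffled training loader, as an alternative to the in-order loader above.
shuffled_train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
###Output
_____no_output_____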
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
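###Markdown
Adam has more hyperparameters than just the learning rate. The cell below is a quick sketch for inspecting the defaults PyTorch filled in alongside our `lr`.
###Code
# Sketch: inspect the optimizer's hyperparameters (our lr plus Adam's defaults).
print(optimizer.defaults["lr"], optimizer.defaults["betas"], optimizer.defaults["eps"])
###Output
_____no_output_____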
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
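###Markdown
The same device selection is often written as a single expression; the cell below is an equivalent sketch.
###Code
# Sketch: equivalent one-line device selection.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
###Output
_____no_output_____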
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
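###Markdown
Because `Normalize` shifts the pixel values away from the [0, 1] range, tensors from these loaders will look wrong if plotted directly. The cell below is a sketch of undoing the normalisation for display; it assumes matplotlib is available and that the loaders above were built successfully.
###Code
# Sketch: undo the ImageNet normalisation so a batch image displays correctly.
import matplotlib.pyplot as plt

mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

images, targets = next(iter(train_data_loader))
img_show = (images[0] * std + mean).clamp(0, 1)   # back to the [0, 1] range
plt.imshow(img_show.permute(1, 2, 0))             # CHW -> HWC for imshow
plt.title(train_data.classes[targets[0]])
plt.show()
###Output
_____no_output_____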
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets).view(-1)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
import imageio
import matplotlib.pyplot as plt
import os
import numpy as np
baseDir = os.getcwd()
baseFolders = ['test','train','val']
dataFolders = ['fish','cat']
# Remove files that are too small to be real images and report any that imageio cannot read.
for folder in baseFolders:
    for klass in dataFolders:
        currDir = os.path.join(baseDir, folder, klass)
        for name in os.listdir(currDir):
            currFile = os.path.join(currDir, name)
            if os.path.getsize(currFile) < 4000:
                os.remove(currFile)                  # almost certainly not a valid image
            else:
                try:
                    im = imageio.imread(currFile)    # raises if the file cannot be decoded
                except Exception:
                    print("Unreadable image:", currFile)
filename = os.path.join(baseDir, "val", "fish", "100_1422.JPG")
print(filename)
im = Image.open(filename)
plt.imshow(im)
def check_image(path):
try:
im = Image.open(path)
#print("A -----"+path)
return True
except:
print("B ----- -----"+path)
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = './train/'
# is_valid_file=check_image raised errors in this environment, so it is omitted below
#train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms)
train_data
val_data_path = "./val/"
# is_valid_file=check_image raised errors in this environment, so it is omitted below
# val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms)
val_data
test_data_path = "./test/"
#test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms)
test_data
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
train_data_path
print(train_data_loader.dataset.imgs[51])
#filename = "./val/fish/100_1422.JPG"
#print(filename)
im = Image.open(filename)
plt.imshow(im)
###Output
('./train/cat\\131545070_970a905e5c.jpg', 0)
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output), dim=1)[1], targets).view(-1)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
<ipython-input-73-f38d79bc5988>:28: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
correct = torch.eq(torch.max(F.softmax(output), dim=1)[1], targets).view(-1)
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/cat/2090435515_841c4a1b6a.jpg")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img))
prediction = prediction.argmax()
print(labels[prediction])
###Output
fish
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
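###Markdown
`Image.open` is lazy, so it can succeed on files that are actually truncated or corrupt. A slightly stricter variant is sketched below; `check_image_strict` is our name, not the book's.
###Code
# Sketch: a stricter validity check using PIL's verify(), which inspects the file structure.
def check_image_strict(path):
    try:
        with Image.open(path) as im:
            im.verify()
        return True
    except Exception:
        return False
###Output
_____no_output_____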
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
Epoch: 1, Training Loss: 2.43, Validation Loss: 6.28, accuracy = 0.36
Epoch: 2, Training Loss: 3.29, Validation Loss: 0.89, accuracy = 0.69
Epoch: 3, Training Loss: 1.18, Validation Loss: 1.13, accuracy = 0.66
Epoch: 4, Training Loss: 0.83, Validation Loss: 0.80, accuracy = 0.67
Epoch: 5, Training Loss: 0.54, Validation Loss: 0.71, accuracy = 0.70
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
fish
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
from pathlib import Path
def check_image(path):
try:
if '/' not in path:
matches = list(Path('.').glob(f'**/{path}'))
path = matches[0]
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
print(train_data_path)
train_data = torchvision.datasets.ImageFolder(root=train_data_path,
transform=img_transforms,
is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
Epoch: 1, Training Loss: 2.80, Validation Loss: 3.79, accuracy = 0.36
Epoch: 2, Training Loss: 2.29, Validation Loss: 2.28, accuracy = 0.66
Epoch: 3, Training Loss: 1.44, Validation Loss: 1.00, accuracy = 0.55
Epoch: 4, Training Loss: 0.73, Validation Loss: 0.75, accuracy = 0.75
Epoch: 5, Training Loss: 0.48, Validation Loss: 0.70, accuracy = 0.67
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
fish
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
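###Markdown
One related pattern worth knowing (not shown in the chapter): if the state_dict was saved on a GPU machine, loading it on a CPU-only machine needs `map_location`. The cell below is a sketch.
###Code
# Sketch: load a checkpoint saved on a GPU machine onto a CPU-only machine.
state_dict = torch.load("/tmp/simplenet", map_location=torch.device("cpu"))
cpu_model = SimpleNet()                    # cpu_model is an illustrative name
cpu_model.load_state_dict(state_dict)
###Output
_____no_output_____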
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers with ReLU activations between them. Note that since we don't apply `softmax()` in our `forward()`, we need to make sure we apply it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network, and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____
###Markdown
Chapter 2: Our First Model
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
###Output
_____no_output_____
###Markdown
Setting up DataLoadersWe'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images. `check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
###Code
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
###Output
_____no_output_____
###Markdown
Set up the transforms for every image:* Resize to 64x64* Convert to tensor* Normalize using ImageNet mean & std
###Code
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our First Model, SimpleNetSimpleNet is a very simple combination of three Linear layers with ReLU activations between them. Note that since we don't apply `softmax()` in our `forward()`, we need to make sure we apply it in our training function during the validation phase.
###Code
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
###Output
_____no_output_____
###Markdown
Create an optimizerHere, we're just using Adam as our optimizer with a learning rate of 0.001.
###Code
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Copy the model to GPUCopy the model to the GPU if available.
###Code
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
###Output
_____no_output_____
###Markdown
Training Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network, and performing validation for each epoch.
###Code
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer, torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
###Output
_____no_output_____
###Markdown
Making predictionsLabels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
###Code
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
###Output
_____no_output_____
###Markdown
Saving ModelsWe can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
###Code
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
###Output
_____no_output_____ |
deprecated/notebooks/KEGG_datacleaning-withSMILEs.ipynb | ###Markdown
check for reversible reactions
###Code
def get_reaction_list(df_with_reaction_column):
"""get the list of reaction from a dataframe that contains reaction column"""
reaction_list = []
for index,row in df_with_reaction_column.iterrows():
for reaction in row['reaction']:
reaction_split = reaction.split("[RN:")[-1]
if reaction_split.startswith("R") and not reaction_split.startswith("RN"):
for i in reaction_split[:-1].split(" "):
reaction_list.append(i)
return reaction_list
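# Illustrative parse (reaction strings are assumed to end with a block like "[RN:R00623 R00754]"):
#   "... aldehyde + NADH [RN:R00623 R00754]" -> split("[RN:")[-1] gives "R00623 R00754]"
#   -> dropping the trailing "]" and splitting on spaces gives ["R00623", "R00754"]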
promiscuous_reaction_list = get_reaction_list(compact_promiscuous_df)
len(promiscuous_reaction_list)
def query_reversible_reaction(list_with_reaction):
"""get the list of reversible reaction"""
reversible_reaction = []
for reaction in reaction_list:
reaction_file = REST.kegg_get(reaction).read()
for i in reaction_file.rstrip().split("\n"):
if i.startswith("EQUATION") and "<=>" in i:
reversible_reaction.append(reaction)
return reversible_reaction
#check whether query_reversible_reaction function works.
reaction_file = REST.kegg_get("R00709").read()
for line in reaction_file.rstrip().split("\n"):
if line.startswith("EQUATION") and "<=>" in line:
print ("R00709")
print (line)
#will take forever to run
#reversible_reaction = query_reversible_reaction(promiscuous_reaction_list)
# it seem like all the reactions are reversible
#len(reversible_reaction)
###Output
_____no_output_____
###Markdown
append substrate molecules to product column
###Code
# difficult to use iterrows because of inconsistent index
compact_promiscuous_df.head(10)
rowindex = np.arange(0,len(compact_promiscuous_df))
compact_promiscuous_df_index = compact_promiscuous_df.set_index(rowindex)
def combine_substrate_product(df_with_ordered_index):
"""append substrates to product column. should not be run multiple times.
it will append substrates multiple times"""
newdf = df_with_ordered_index
for index,row in df_with_ordered_index.iterrows():
productlist = row['product']
substratelist = row['substrate']
newdf.iloc[index,2] = productlist + substratelist
return newdf
# do not run this multiple times!
combined_df = combine_substrate_product(compact_promiscuous_df_index)
# check whether it is added multiple times
# if appended multiple times, need to rerun cells from the very beginning
combined_df.iloc[0,2]
compact_combined_df = combined_df[['entry','product']]
compact_combined_df.head(10)
# save substrate and product combined dataframe to csv
# might remove this dataframe from the git repo soon
# substrate_to_product_promiscuous_df.to_csv("../datasets/substrate_product_combined_promiscuous.csv")
###Output
_____no_output_____
###Markdown
cofactor removal
###Code
len(compact_combined_df)
# test text slicing
test='aldehyde [CPD:C00071]'
test[-7:-1]
def get_cofactor_list(cofactor_df,CPDcolumn):
cofactor_list = [cofactor[4:10] for cofactor in cofactor_df[CPDcolumn]]
return cofactor_list
cofactor_df=pd.read_csv("../datasets/cofactor_list.csv")
cofactor_df.head(10)
cofactor_list = get_cofactor_list(cofactor_df,"CPD")
cofactor_list
def get_cpd(compound_full):
"when full name of compound inserted, return cpd id"
cpd = compound_full[-7:-1]
return cpd
def rm_cofactor_only_cpd(df,compound_columnname,cofactor_list):
newdf = df.drop(["product"],axis=1)
cleaned_compound_column = []
for index,row in df.iterrows():
cpd_compound_list =[]
for compound in row[compound_columnname]:
if "CPD" in compound:
onlycpd = get_cpd(compound)
if onlycpd not in cofactor_list:
cpd_compound_list.append(onlycpd)
else:
pass
if len(cpd_compound_list)==0:
cleaned_compound_column.append("NA")
else:
cleaned_compound_column.append(cpd_compound_list)
newdf['product'] = cleaned_compound_column
return newdf
cleaned_df_productinList = rm_cofactor_only_cpd(compact_combined_df,'product',cofactor_list)
#cleaned_promiscuous_enzyme_df.to_csv("../datasets/cleaned_promiscous_enzyme_df.csv", header=['entry','product'])
#remove enzymes with no products
noNAenzyme = cleaned_df_productinList.loc[cleaned_df_productinList['product']!='NA']
len(noNAenzyme)
###Output
_____no_output_____
###Markdown
Format the dataframe so it can be used directly for PubChem ID and SMILES string lookups
###Code
noNAenzyme.rename(columns={'product':'products'}, inplace=True)
noNAenzyme
def itemlist_eachrow(df,oldcolumnname,newcolumnname,enzymecolumn):
newdf = df[oldcolumnname].\
apply(pd.Series).\
merge(df, left_index=True, right_index=True).\
drop([oldcolumnname],axis=1).\
melt(id_vars=[enzymecolumn],value_name=newcolumnname).\
sort_values(by=[enzymecolumn]).\
dropna().\
drop(columns=["variable"])
return newdf
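# Illustrative example (hypothetical values): a row with entry='1.1.1.1' and
# products=['C00001', 'C00002'] is expanded by the melt above into two rows,
# (entry='1.1.1.1', product='C00001') and (entry='1.1.1.1', product='C00002').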
expanded_noNAenzyme = itemlist_eachrow(noNAenzyme,"products","product","entry")
#dropped duplicates within product column
expanded_noNAenzyme.drop_duplicates(['entry','product'],keep='first',inplace=True)
expanded_noNAenzyme
len(expanded_noNAenzyme)
###Output
_____no_output_____
###Markdown
PubChem ID search
###Code
import re
from Bio.KEGG import Compound
def compound_records_to_df(file_path):
"""
Input should be a filepath string pointing to a gzipped text file of KEGG enzyme records.
Function parses all records using Biopython.Bio.KEGG.Compound parser, and returns a pandas dataframe.
"""
compound_fields = [method for method in dir(Compound.Record()) if not method.startswith('_')]
data_matrix = []
with gzip.open(file_path, 'rt') as file:
for record in Compound.parse(file):
data_matrix.append([getattr(record, field) for field in compound_fields])
compound_df = pd.DataFrame(data_matrix, columns=compound_fields)
return compound_df
compound_df = compound_records_to_df('../datasets/KEGG_compound_db_entries.gz')
def extract_PubChem_id(field):
"""
This function uses regular expressions to extract the PubChem compound IDs from a field in a record
"""
regex = "'PubChem', \[\'(\d+)\'\]\)" # matches "'PubChem', ['" characters exactly, then captures any number of digits (\d+), before another literal "']" character match
ids = re.findall(regex, str(field), re.IGNORECASE)
if len(ids) > 0:
pubchem_id = ids[0]
else:
pubchem_id = ''
return pubchem_id
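# Illustrative example: a dblinks field such as [('CAS', ['50-00-0']), ('PubChem', ['3333'])]
# is converted to a string, and the regex captures the digits following 'PubChem', giving '3333'.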
PubChemID_list = []
for _, row in compound_df.iterrows():
pubchem_id = extract_PubChem_id(row['dblinks'])
PubChemID_list.append(pubchem_id)
compound_df['PubChem'] = PubChemID_list
compound_df.head(10)
joint_enzyme_compound_df = expanded_noNAenzyme.merge(compound_df, left_on='product', right_on='entry')
joint_enzyme_compound_df.head(10)
compact_joint_enzyme_compound_df = joint_enzyme_compound_df[['entry_x','product','PubChem']].\
sort_values(by=['entry_x'])
compact_joint_enzyme_compound_df.head(10)
print (len(compact_joint_enzyme_compound_df))
#rename column names
compact_joint_enzyme_compound_df.rename(columns={'entry_x':'entry','product':'KEGG'},inplace=True)
compact_joint_enzyme_compound_df = compact_joint_enzyme_compound_df.loc[compact_joint_enzyme_compound_df['PubChem']!='']
len(compact_joint_enzyme_compound_df)
compact_joint_enzyme_compound_df.columns
shortened_df = compact_joint_enzyme_compound_df.copy()
short_50 = shortened_df.head(50)
###Output
_____no_output_____
###Markdown
def sid_to_smiles(sid): """Takes an SID and prints the associated SMILES string.""" substance = pc.Substance.from_sid(sid) cid = substance.standardized_cid compound = pc.get_compounds(cid)[0] return compound.isomeric_smilesdef kegg_df_to_smiles(kegg_df): """Takes a pandas dataframe that includes a column of SIDs, gets the isomeric SMILES for each SID, stores them as a list, then adds a SMILES column.""" res = [] unsuccessful_list = [] for i in range(len(kegg_df)): sid = kegg_df.iloc[i, 2] CHANGE THIS 1 TO THE PROPER COLUMN NUMBER FOR SID try: result = sid_to_smiles(sid) res.append(result) except: res.append('none') unsuccessful_list.append(sid) pass kegg_df.insert(3, column='SMILES', value=res) Change this 2 to the number where the smiles column should be kegg_df.to_csv(r'../datasets/cleaned_kegg_with_smiles') return kegg_df, unsuccessful_list
###Code
def sid_to_smiles(sid):
"""Takes an SID and prints the associated SMILES string."""
substance = pc.Substance.from_sid(sid)
cid = substance.standardized_cid
compound = pc.get_compounds(cid)[0]
return compound.isomeric_smiles, cid
def kegg_df_to_smiles(kegg_df):
"""Takes a pandas dataframe that includes a column of SIDs, gets the isomeric SMILES for each SID, stores them as a list, then adds a SMILES column."""
res = []
cid_list = []
unsuccessful_list = []
for i in range(len(kegg_df)):
sid = kegg_df.iloc[i, 2]  # column index 2 holds the PubChem SID; change it if the column order differs
try:
smile_result = sid_to_smiles(sid)[0]
res.append(smile_result)
cid_result = sid_to_smiles(sid)[1]
cid_list.append(cid_result)
except:
res.append('none')
cid_list.append('none')
unsuccessful_list.append(sid)
pass
kegg_df.insert(3, column='CID', value=cid_list)
kegg_df.insert(4, column='SMILES', value=res)  # insert the SMILES column at position 4; change it if the column layout differs
kegg_df.to_csv(r'../datasets/df_cleaned_kegg_with_smiles.csv')
return kegg_df, unsuccessful_list
kegg_df_to_smiles(compact_joint_enzyme_compound_df)
ugh = ['3371',
'3526',
'4627',
'4764',
'17396586',
'17396584',
'96023258',
'96023257',
'96023254',
'96023253',
'96023256',
'96023255',
'135626278',
'135626279',
'7881',
'7880',
'8711',
'17396499',
'17396498',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'135626312',
'17396588',
'172232403',
'5930',
'6046',
'350078274',
'350078276',
'350078273',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'3936',
'3931',
'3931',
'3936',
'3439',
'3438',
'3931',
'3936',
'3439',
'3438',
'3438',
'3439',
'3439',
'3438',
'3439',
'3438',
'3439',
'3438',
'3438',
'3439',
'3439',
'3438',
'3931',
'3936',
'3439',
'3438',
'4242',
'4245',
'336445168',
'336445170',
'336445167',
'4245',
'336445169',
'4242',
'14292',
'4245',
'4242',
'254741478',
'254741479',
'4242',
'4245',
'4245',
'4242',
'4245',
'4242',
'4245',
'4242',
'4242',
'336445167',
'4245',
'4245',
'4242',
'336445169',
'3438',
'3439',
'336445167',
'3439',
'3438',
'4242',
'4245',
'4245',
'4242',
'4242',
'4245',
'4245',
'336445169',
'336445167',
'4242',
'5930',
'6046',
'6046',
'5930',
'4242',
'4245',
'6046',
'5930',
'3438',
'3439',
'4245',
'4242',
'3438',
'3439',
'7734',
'7735',
'340125640',
'328082950',
'340125643',
'96024377',
'3426',
'3425',
'17396594',
'17396595',
'3438',
'3439',
'6918',
'7171',
'3438',
'3439',
'3426',
'3425',
'3425',
'3426',
'3636',
'5929',
'3635',
'6627',
'5741',
'3887',
'315431311',
'6828',
'315431312',
'3887',
'6828',
'350078249',
'350078250',
'350078252',
'350078251',
'3341',
'6549',
'6915',
'5703',
'4730',
'96023244',
'7070',
'7272',
'6622',
'315431316',
'315431317',
'6902',
'5130',
'340125666',
'3342',
'340125665',
'340125667',
'329728080',
'329728081',
'329728079',
'4794',
'5136',
'329728078',
'96023263',
'4794',
'5136',
'328082930',
'328082931',
'328082929',
'254741191',
'7462',
'7362',
'7362',
'135626269',
'135626270',
'135626271',
'5260',
'6747',
'254741326',
'8311',
'340125667',
'340125665',
'340125666',
'5260',
'135626270',
'333497482',
'333497484',
'333497483',
'5930',
'6046',
'9490',
'47205174',
'9491',
'3341',
'5330',
'3501',
'124489752',
'124489757',
'5279',
'124489750',
'3914',
'3643',
'3465',
'3635',
'3636',
'5703',
'329728063',
'47205136',
'47205137',
'376219005',
'3426',
'3425',
'4998',
'3712',
'3360',
'3465',
'135626269',
'135626270',
'5260',
'47205322',
'3887',
'3371',
'7130',
'7224',
'4416',
'3462',
'4266',
'3360',
'8030',
'254741203',
'8029',
'5259',
'5703',
'3887',
'5813',
'3360',
'3740',
'4348',
'3501',
'17396557',
'3746',
'5502',
'5185',
'17396556',
'5502',
'3746',
'17396557',
'5185',
'17396556',
'5813',
'3445',
'3348',
'3911',
'47205545',
'47205548',
'3341',
'3341',
'3348',
'3341',
'3341',
'3348',
'7736']
len(ugh) # invalid SIDs because of KEGG
myset = set(ugh)
len(myset) # unique invalid SIDs because of KEGG
pd.read_csv('../datasets/playground_df_cleaned_kegg_with_smiles.csv')
###Output
_____no_output_____ |
code/ex03/ex03_moritz_eck.ipynb | ###Markdown
Deep Learning for Natural Language Processing: Exercise 03Moritz Eck (14-715-296)University of ZurichPlease see the section right at the bottom of this notebook for the discussion of the results as well as the answers to the exercise questions. Mount Google Drive (Please do this step first => only needs to be done once!)This mounts the user's Google Drive directly.On my personal machine inside the Google Drive folder the input files are stored in the following folder: **~/Google Drive/Colab Notebooks/ex03/**Below I have defined the default filepath as **default_fp = 'drive/Colab Notebooks/ex03/'**.Please change the filepath to the location where you have the input files (the Shakespeare text files) saved.
###Code
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
###Output
_____no_output_____
###Markdown
**Mount Google Drive**
###Code
!mkdir -p drive
!google-drive-ocamlfuse drive
###Output
_____no_output_____
###Markdown
Install the required packages
###Code
!pip install pandas
!pip install numpy
!pip install tensorflow
###Output
_____no_output_____
###Markdown
Check that the GPU is used
###Code
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# encoding: UTF-8
# Copyright 2017 Google.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import glob
import sys
# size of the alphabet that we work with
ALPHASIZE = 98
# Specification of the supported alphabet (subset of ASCII-7)
# 10 line feed LF
# 32-64 numbers and punctuation
# 65-90 upper-case letters
# 91-97 more punctuation
# 97-122 lower-case letters
# 123-126 more punctuation
def convert_from_alphabet(a):
"""Encode a character
:param a: one character
:return: the encoded value
"""
if a == 9:
return 1
if a == 10:
return 127 - 30 # LF
elif 32 <= a <= 126:
return a - 30
else:
return 0 # unknown
# encoded values:
# unknown = 0
# tab = 1
# space = 2
# all chars from 32 to 126 = c-30
# LF mapped to 127-30
def convert_to_alphabet(c, avoid_tab_and_lf=False):
"""Decode a code point
:param c: code point
:param avoid_tab_and_lf: if True, tab and line feed characters are replaced by '\'
:return: decoded character
"""
if c == 1:
return 32 if avoid_tab_and_lf else 9 # space instead of TAB
if c == 127 - 30:
return 92 if avoid_tab_and_lf else 10 # \ instead of LF
if 32 <= c + 30 <= 126:
return c + 30
else:
return 0 # unknown
def encode_text(s):
"""Encode a string.
:param s: a text string
:return: encoded list of code points
"""
return list(map(lambda a: convert_from_alphabet(ord(a)), s))
def decode_to_text(c, avoid_tab_and_lf=False):
"""Decode an encoded string.
:param c: encoded list of code points
:param avoid_tab_and_lf: if True, tab and line feed characters are replaced by '\'
:return:
"""
return "".join(map(lambda a: chr(convert_to_alphabet(a, avoid_tab_and_lf)), c))
def sample_from_probabilities(probabilities, topn=ALPHASIZE):
"""Roll the dice to produce a random integer in the [0..ALPHASIZE] range,
according to the provided probabilities. If topn is specified, only the
topn highest probabilities are taken into account.
:param probabilities: a list of size ALPHASIZE with individual probabilities
:param topn: the number of highest probabilities to consider. Defaults to all of them.
:return: a random integer
"""
p = np.squeeze(probabilities)
p[np.argsort(p)[:-topn]] = 0
p = p / np.sum(p)
return np.random.choice(ALPHASIZE, 1, p=p)[0]
def rnn_minibatch_sequencer(raw_data, batch_size, sequence_size, nb_epochs):
"""
Divides the data into batches of sequences so that all the sequences in one batch
continue in the next batch. This is a generator that will keep returning batches
until the input data has been seen nb_epochs times. Sequences are continued even
between epochs, apart from one, the one corresponding to the end of raw_data.
The remainder at the end of raw_data that does not fit in an full batch is ignored.
:param raw_data: the training text
:param batch_size: the size of a training minibatch
:param sequence_size: the unroll size of the RNN
:param nb_epochs: number of epochs to train on
:return:
x: one batch of training sequences
y: on batch of target sequences, i.e. training sequences shifted by 1
epoch: the current epoch number (starting at 0)
"""
data = np.array(raw_data)
data_len = data.shape[0]
# using (data_len-1) because we must provide for the sequence shifted by 1 too
nb_batches = (data_len - 1) // (batch_size * sequence_size)
assert nb_batches > 0, "Not enough data, even for a single batch. Try using a smaller batch_size."
rounded_data_len = nb_batches * batch_size * sequence_size
xdata = np.reshape(data[0:rounded_data_len], [batch_size, nb_batches * sequence_size])
ydata = np.reshape(data[1:rounded_data_len + 1], [batch_size, nb_batches * sequence_size])
for epoch in range(nb_epochs):
for batch in range(nb_batches):
x = xdata[:, batch * sequence_size:(batch + 1) * sequence_size]
y = ydata[:, batch * sequence_size:(batch + 1) * sequence_size]
x = np.roll(x, -epoch, axis=0) # to continue the text from epoch to epoch (do not reset rnn state!)
y = np.roll(y, -epoch, axis=0)
yield x, y, epoch
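# Worked example (added for clarity): with raw_data = list(range(13)), batch_size=2 and
# sequence_size=3, nb_batches = (13-1)//(2*3) = 2, so xdata = [[0..5],[6..11]] and
# ydata = [[1..6],[7..12]]; the first yielded batch is x=[[0,1,2],[6,7,8]], y=[[1,2,3],[7,8,9]].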
def find_book(index, bookranges):
return next(
book["name"] for book in bookranges if (book["start"] <= index < book["end"]))
def find_book_index(index, bookranges):
return next(
i for i, book in enumerate(bookranges) if (book["start"] <= index < book["end"]))
def print_learning_learned_comparison(X, Y, losses, bookranges, batch_loss, batch_accuracy, epoch_size, index, epoch):
"""Display utility for printing learning statistics"""
print()
# epoch_size in number of batches
batch_size = X.shape[0] # batch_size in number of sequences
sequence_len = X.shape[1] # sequence_len in number of characters
start_index_in_epoch = index % (epoch_size * batch_size * sequence_len)
for k in range(1):
index_in_epoch = index % (epoch_size * batch_size * sequence_len)
decx = decode_to_text(X[k], avoid_tab_and_lf=True)
decy = decode_to_text(Y[k], avoid_tab_and_lf=True)
bookname = find_book(index_in_epoch, bookranges)
formatted_bookname = "{: <10.40}".format(bookname) # min 10 and max 40 chars
epoch_string = "{:4d}".format(index) + " (epoch {}) ".format(epoch)
loss_string = "loss: {:.5f}".format(losses[k])
print_string = epoch_string + formatted_bookname + " │ {} │ {} │ {}"
print(print_string.format(decx, decy, loss_string))
index += sequence_len
# box formatting characters:
# │ \u2502
# ─ \u2500
# └ \u2514
# ┘ \u2518
# ┴ \u2534
# ┌ \u250C
# ┐ \u2510
format_string = "└{:─^" + str(len(epoch_string)) + "}"
format_string += "{:─^" + str(len(formatted_bookname)) + "}"
format_string += "┴{:─^" + str(len(decx) + 2) + "}"
format_string += "┴{:─^" + str(len(decy) + 2) + "}"
format_string += "┴{:─^" + str(len(loss_string)) + "}┘"
footer = format_string.format('INDEX', 'BOOK NAME', 'TRAINING SEQUENCE', 'PREDICTED SEQUENCE', 'LOSS')
print(footer)
# print statistics
batch_index = start_index_in_epoch // (batch_size * sequence_len)
batch_string = "batch {}/{} in epoch {},".format(batch_index, epoch_size, epoch)
stats = "{: <28} batch loss: {:.5f}, batch accuracy: {:.5f}".format(batch_string, batch_loss, batch_accuracy)
print()
print("TRAINING STATS: {}".format(stats))
class Progress:
"""Text mode progress bar.
Usage:
p = Progress(30)
p.step()
p.step()
p.step(start=True) # to restart form 0%
The progress bar displays a new header at each restart."""
def __init__(self, maxi, size=100, msg=""):
"""
:param maxi: the number of steps required to reach 100%
:param size: the number of characters taken on the screen by the progress bar
:param msg: the message displayed in the header of the progress bat
"""
self.maxi = maxi
self.p = self.__start_progress(maxi)() # () to get the iterator from the generator
self.header_printed = False
self.msg = msg
self.size = size
def step(self, reset=False):
if reset:
self.__init__(self.maxi, self.size, self.msg)
if not self.header_printed:
self.__print_header()
next(self.p)
def __print_header(self):
print()
format_string = "0%{: ^" + str(self.size - 6) + "}100%"
print(format_string.format(self.msg))
self.header_printed = True
def __start_progress(self, maxi):
def print_progress():
# Bresenham's algorithm. Yields the number of dots printed.
# This will always print 100 dots in max invocations.
dx = maxi
dy = self.size
d = dy - dx
for x in range(maxi):
k = 0
while d >= 0:
print('=', end="", flush=True)
k += 1
d -= dx
d += dy
yield k
return print_progress
def read_data_files(directory, validation=True):
"""Read data files according to the specified glob pattern
Optionally set aside the last file as validation data.
No validation data is returned if there are 5 files or fewer.
:param directory: for example "data/*.txt"
:param validation: if True (default), sets the last file aside as validation data
:return: training data, validation data, list of loaded file names with ranges
If validation is False (or there are too few files), all the text is returned as training data and the validation set is empty.
"""
codetext = []
bookranges = []
shakelist = glob.glob(directory, recursive=True)
for shakefile in shakelist:
shaketext = open(shakefile, "r")
print("Loading file " + shakefile)
start = len(codetext)
codetext.extend(encode_text(shaketext.read()))
end = len(codetext)
bookranges.append({"start": start, "end": end, "name": shakefile.rsplit("/", 1)[-1]})
shaketext.close()
if len(bookranges) == 0:
sys.exit("No training data has been found. Aborting.")
# For validation, use roughly 90K of text,
# but no more than 10% of the entire text
# and no more than 1 book in 5 => no validation at all for 5 files or fewer.
# 10% of the text is how many files ?
total_len = len(codetext)
validation_len = 0
nb_books1 = 0
for book in reversed(bookranges):
validation_len += book["end"]-book["start"]
nb_books1 += 1
if validation_len > total_len // 10:
break
# 90K of text is how many books ?
validation_len = 0
nb_books2 = 0
for book in reversed(bookranges):
validation_len += book["end"]-book["start"]
nb_books2 += 1
if validation_len > 90*1024:
break
# 20% of the books is how many books ?
nb_books3 = len(bookranges) // 5
# pick the smallest
nb_books = min(nb_books1, nb_books2, nb_books3)
if nb_books == 0 or not validation:
cutoff = len(codetext)
else:
cutoff = bookranges[-nb_books]["start"]
valitext = codetext[cutoff:]
codetext = codetext[:cutoff]
return codetext, valitext, bookranges
def print_data_stats(datalen, valilen, epoch_size):
datalen_mb = datalen/1024.0/1024.0
valilen_kb = valilen/1024.0
print("Training text size is {:.2f}MB with {:.2f}KB set aside for validation.".format(datalen_mb, valilen_kb)
+ " There will be {} batches per epoch".format(epoch_size))
def print_validation_header(validation_start, bookranges):
bookindex = find_book_index(validation_start, bookranges)
books = ''
for i in range(bookindex, len(bookranges)):
books += bookranges[i]["name"]
if i < len(bookranges)-1:
books += ", "
print("{: <60}".format("Validating on " + books), flush=True)
def print_validation_stats(loss, accuracy):
print("VALIDATION STATS: loss: {:.5f}, accuracy: {:.5f}".format(loss,
accuracy))
def print_text_generation_header():
print()
print("┌{:─^111}┐".format('Generating random text from learned state'))
def print_text_generation_footer():
print()
print("└{:─^111}┘".format('End of generation'))
def frequency_limiter(n, multiple=1, modulo=0):
def limit(i):
return i % (multiple * n) == modulo*multiple
return limit
###Output
_____no_output_____
###Markdown
Training the Model
###Code
# encoding: UTF-8
# Copyright 2017 Google.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib.util
import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib import rnn # rnn stuff temporarily in contrib, moving back to code in TF 1.1
import os
import time
import math
import numpy as np
tf.set_random_seed(0)
# model parameters
#
# Usage:
# Training only:
# Leave all the parameters as they are
# Disable validation to run a bit faster (set validation=False below)
# You can follow progress in Tensorboard: tensorboard --log-dir=log
# Training and experimentation (default):
# Keep validation enabled
# You can now play with the parameters anf follow the effects in Tensorboard
# A good choice of parameters ensures that the testing and validation curves stay close
# To see the curves drift apart ("overfitting") try to use an insufficient amount of
# training data (shakedir = "shakespeare/t*.txt" for example)
#
SEQLEN = 30
BATCHSIZE = 200
ALPHASIZE = ALPHASIZE
INTERNALSIZE = 512
NLAYERS = 3
learning_rate = 0.001 # fixed learning rate
dropout_pkeep = 0.8 # some dropout
# load data, either shakespeare, or the Python source of Tensorflow itself
default_fp = 'drive/Colab Notebooks/ex03/'
shakedir = default_fp + "/shakespeare/*.txt"
#shakedir = "../tensorflow/**/*.py"
codetext, valitext, bookranges = read_data_files(shakedir, validation=True)
# display some stats on the data
epoch_size = len(codetext) // (BATCHSIZE * SEQLEN)
print_data_stats(len(codetext), len(valitext), epoch_size)
#
# the model (see FAQ in README.md)
#
lr = tf.placeholder(tf.float32, name='lr') # learning rate
pkeep = tf.placeholder(tf.float32, name='pkeep') # dropout parameter
batchsize = tf.placeholder(tf.int32, name='batchsize')
# inputs
X = tf.placeholder(tf.uint8, [None, None], name='X') # [ BATCHSIZE, SEQLEN ]
Xo = tf.one_hot(X, ALPHASIZE, 1.0, 0.0) # [ BATCHSIZE, SEQLEN, ALPHASIZE ]
# expected outputs = same sequence shifted by 1 since we are trying to predict the next character
Y_ = tf.placeholder(tf.uint8, [None, None], name='Y_') # [ BATCHSIZE, SEQLEN ]
Yo_ = tf.one_hot(Y_, ALPHASIZE, 1.0, 0.0) # [ BATCHSIZE, SEQLEN, ALPHASIZE ]
# input state
Hin = tf.placeholder(tf.float32, [None, INTERNALSIZE*NLAYERS], name='Hin') # [ BATCHSIZE, INTERNALSIZE * NLAYERS]
# using a NLAYERS=3 layers of GRU cells, unrolled SEQLEN=30 times
# dynamic_rnn infers SEQLEN from the size of the inputs Xo
# How to properly apply dropout in RNNs: see README.md
cells = [rnn.GRUCell(INTERNALSIZE) for _ in range(NLAYERS)]
# "naive dropout" implementation
dropcells = [rnn.DropoutWrapper(cell,input_keep_prob=pkeep) for cell in cells]
multicell = rnn.MultiRNNCell(dropcells, state_is_tuple=False)
multicell = rnn.DropoutWrapper(multicell, output_keep_prob=pkeep) # dropout for the softmax layer
Yr, H = tf.nn.dynamic_rnn(multicell, Xo, dtype=tf.float32, initial_state=Hin)
# Yr: [ BATCHSIZE, SEQLEN, INTERNALSIZE ]
# H: [ BATCHSIZE, INTERNALSIZE*NLAYERS ] # this is the last state in the sequence
H = tf.identity(H, name='H') # just to give it a name
# Softmax layer implementation:
# Flatten the first two dimension of the output [ BATCHSIZE, SEQLEN, ALPHASIZE ] => [ BATCHSIZE x SEQLEN, ALPHASIZE ]
# then apply softmax readout layer. This way, the weights and biases are shared across unrolled time steps.
# From the readout point of view, a value coming from a sequence time step or a minibatch item is the same thing.
Yflat = tf.reshape(Yr, [-1, INTERNALSIZE]) # [ BATCHSIZE x SEQLEN, INTERNALSIZE ]
Ylogits = layers.linear(Yflat, ALPHASIZE) # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
Yflat_ = tf.reshape(Yo_, [-1, ALPHASIZE]) # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
loss = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Yflat_) # [ BATCHSIZE x SEQLEN ]
loss = tf.reshape(loss, [batchsize, -1]) # [ BATCHSIZE, SEQLEN ]
Yo = tf.nn.softmax(Ylogits, name='Yo') # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
Y = tf.argmax(Yo, 1) # [ BATCHSIZE x SEQLEN ]
Y = tf.reshape(Y, [batchsize, -1], name="Y") # [ BATCHSIZE, SEQLEN ]
# choose the optimizer
# train_step = tf.train.AdamOptimizer(lr)
train_step = tf.train.RMSPropOptimizer(lr, momentum=0.9)
# train_step = tf.train.GradientDescentOptimizer(lr)
name = train_step.get_name()
train_step = train_step.minimize(loss)
# stats for display
seqloss = tf.reduce_mean(loss, 1)
batchloss = tf.reduce_mean(seqloss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(Y_, tf.cast(Y, tf.uint8)), tf.float32))
loss_summary = tf.summary.scalar("batch_loss", batchloss)
acc_summary = tf.summary.scalar("batch_accuracy", accuracy)
summaries = tf.summary.merge([loss_summary, acc_summary])
# Init Tensorboard stuff. This will save Tensorboard information into a different
# folder at each run named 'log/<timestamp>/'. Two sets of data are saved so that
# you can compare training and validation curves visually in Tensorboard.
timestamp = str(math.trunc(time.time()))
summary_writer = tf.summary.FileWriter(default_fp + "/log/" + timestamp + '-{}'.format(name) + "-training", flush_secs=15)
validation_writer = tf.summary.FileWriter(default_fp + "/log/" + timestamp + '-{}'.format(name) + "-validation", flush_secs=15)
# Init for saving models. They will be saved into a directory named 'checkpoints'.
# Only the last checkpoint is kept.
if not os.path.exists("checkpoints"):
os.mkdir("checkpoints")
saver = tf.train.Saver(max_to_keep=1000)
# for display: init the progress bar
DISPLAY_FREQ = 50
_50_BATCHES = DISPLAY_FREQ * BATCHSIZE * SEQLEN
progress = Progress(DISPLAY_FREQ, size=111+2, msg="Training on next "+str(DISPLAY_FREQ)+" batches")
# init
istate = np.zeros([BATCHSIZE, INTERNALSIZE*NLAYERS]) # initial zero input state
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
step = 0
# training loop
for x, y_, epoch in rnn_minibatch_sequencer(codetext, BATCHSIZE, SEQLEN, nb_epochs=4):
# train on one minibatch
feed_dict = {X: x, Y_: y_, Hin: istate, lr: learning_rate, pkeep: dropout_pkeep, batchsize: BATCHSIZE}
_, y, ostate = sess.run([train_step, Y, H], feed_dict=feed_dict)
# log training data for Tensorboard display a mini-batch of sequences (every 50 batches)
if step % _50_BATCHES == 0:
feed_dict = {X: x, Y_: y_, Hin: istate, pkeep: 1.0, batchsize: BATCHSIZE} # no dropout for validation
y, l, bl, acc, smm = sess.run([Y, seqloss, batchloss, accuracy, summaries], feed_dict=feed_dict)
print_learning_learned_comparison(x, y, l, bookranges, bl, acc, epoch_size, step, epoch)
summary_writer.add_summary(smm, step)
summary_writer.flush()
# run a validation step every 50 batches
# The validation text should be a single sequence but that's too slow (1s per 1024 chars!),
# so we cut it up and batch the pieces (slightly inaccurate)
# tested: validating with 5K sequences instead of 1K is only slightly more accurate, but a lot slower.
if step % _50_BATCHES == 0 and len(valitext) > 0:
VALI_SEQLEN = 1*1024 # Sequence length for validation. State will be wrong at the start of each sequence.
bsize = len(valitext) // VALI_SEQLEN
print_validation_header(len(codetext), bookranges)
vali_x, vali_y, _ = next(rnn_minibatch_sequencer(valitext, bsize, VALI_SEQLEN, 1)) # all data in 1 batch
vali_nullstate = np.zeros([bsize, INTERNALSIZE*NLAYERS])
feed_dict = {X: vali_x, Y_: vali_y, Hin: vali_nullstate, pkeep: 1.0, # no dropout for validation
batchsize: bsize}
ls, acc, smm = sess.run([batchloss, accuracy, summaries], feed_dict=feed_dict)
print_validation_stats(ls, acc)
# save validation data for Tensorboard
validation_writer.add_summary(smm, step)
validation_writer.flush()
# # display a short text generated with the current weights and biases (every 500 batches)
# if step // 10 % _50_BATCHES == 0:
# print_text_generation_header()
# ry = np.array([[convert_from_alphabet(ord("K"))]])
# rh = np.zeros([1, INTERNALSIZE * NLAYERS])
# for k in range(1000):
# ryo, rh = sess.run([Yo, H], feed_dict={X: ry, pkeep: 1.0, Hin: rh, batchsize: 1})
# rc = sample_from_probabilities(ryo, topn=10 if epoch <= 1 else 2)
# print(chr(convert_to_alphabet(rc)), end="")
# ry = np.array([[rc]])
# print_text_generation_footer()
# # save a checkpoint (every 500 batches)
# if step // 10 % _50_BATCHES == 0:
# saved_file = saver.save(sess, default_fp + '/checkpoints/rnn_train_' + timestamp, global_step=step)
# print("Saved file: " + saved_file)
# display progress bar
progress.step(reset=step % _50_BATCHES == 0)
# loop state around
istate = ostate
step += BATCHSIZE * SEQLEN
# all runs: SEQLEN = 30, BATCHSIZE = 100, ALPHASIZE = 98, INTERNALSIZE = 512, NLAYERS = 3
# run 1477669632 decaying learning rate 0.001-0.0001-1e7 dropout 0.5: not good
# run 1477670023 lr=0.001 no dropout: very good
# Tensorflow runs:
# 1485434262
# trained on shakespeare/t*.txt only. Validation on 1K sequences
# validation loss goes up from step 5M (overfitting because of small dataset)
# 1485436038
# trained on shakespeare/t*.txt only. Validation on 5K sequences
# On 5K sequences validation accuracy is slightly higher and loss slightly lower
# => sequence breaks do introduce inaccuracies but the effect is small
# 1485437956
# Trained on shakespeare/*.txt. Validation on 1K sequences
# On this much larger dataset, validation loss still decreasing after 6 epochs (step 35M)
# 1495447371
# Trained on shakespeare/*.txt no dropout, 30 epochs
# Validation loss starts going up after 10 epochs (overfitting)
# 1495440473
# Trained on shakespeare/*.txt "naive dropout" pkeep=0.8, 30 epochs
# Dropout brings the validation loss under control, preventing it from
# going up but the effect is small.
###Output
_____no_output_____ |
day_3_simple_model.ipynb | ###Markdown
Data loading
###Code
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
df.select_dtypes(np.number).columns
###Output
_____no_output_____
###Markdown
Dummy model
###Code
feats = ['car_id']
X = df[feats].values
y = df['price_value'].values
model = DummyRegressor().fit(X,y)
y_pred = model.predict(X)
mae(y, y_pred)
[x for x in df.columns if "price" in x]
df.price_currency.value_counts()
df = df[df.price_currency != 'EUR']
df.shape
###Output
_____no_output_____
###Markdown
Features
###Code
[print(x) for x in df.columns]
df.param_color.factorize()[0]
SUFFIX_CAT = '_cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_vals = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_vals
else:
df[feat + SUFFIX_CAT] = df[feat].factorize()[0]
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
len(cat_feats)
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
X = df[cat_feats].values
y = df.price_value.values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, cv =3,scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5).fit(X,y)
imp = PermutationImportance(m, random_state =0).fit(X,y)
eli5.show_weights(imp, feature_names=cat_feats)
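# Added note: PermutationImportance shuffles one feature at a time and measures how much the
# model's score drops; the features whose shuffling hurts the score most are ranked highest.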
###Output
_____no_output_____
###Markdown
**Dummy Model**
###Code
df.select_dtypes(np.number).columns
feats = ['car_id']
X = df[ feats ].values
y = df[ 'price_value' ].values
model = DummyRegressor()
model.fit(X, y)
y_pred = model.predict(X)
mae(y, y_pred)
[x for x in df.columns if 'price' in x]
df['price_currency'].value_counts()
df['price_currency'].value_counts(normalize=True)
df = df[df['price_currency'] != 'EUR']
df.shape
###Output
_____no_output_____
###Markdown
**Features**
###Code
df.head()
for feat in df.columns:
print(feat)
df['param_color'].factorize()
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list):continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = df[feat].factorize()[0]
cat_feats = [x for x in df.columns if SUFFIX_CAT in x ]
cat_feats = [x for x in cat_feats if 'price' not in x ]
len(cat_feats)
x = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5)
m.fit(x, y)
imp = PermutationImportance(m, random_state=0).fit(x, y)
eli5.show_weights(imp, feature_names=cat_feats)
###Output
_____no_output_____
###Markdown
READ DATA
###Code
cd '/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_carprice'
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Dummy Model
###Code
feats = ['car_id']
X = df[feats].values
y = df['price_value'].values
model = DummyRegressor()
model.fit(X, y)
y_pred = model.predict(X)
mae(y, y_pred)
df['price_currency'].value_counts()
df = df[df['price_currency'] != 'EUR']
df['price_currency'].value_counts()
###Output
_____no_output_____
###Markdown
FEATURES
###Code
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = df[feat].factorize()[0]
cat_feats = [ x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [ x for x in cat_feats if 'price' not in x]
len(cat_feats)
X = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5)
m.fit(X, y)
importances = PermutationImportance(m, random_state=2020).fit(X, y)
eli5.show_weights(importances, feature_names=cat_feats)
###Output
_____no_output_____
###Markdown
###Code
## Features
df.head()
df['param_color'].factorize()
SUFFIX_CAT = '__cat'
for feat in df.columns:
# print(feat)
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
#df[feat + SUFFIX_CAT] = df[feat].factorize()[0]
# Why the SUFFIX_CAT check above matters if this loop is re-run:
# 1st run: column 'a' gets a new factorized column 'a__cat'
# 2nd run without the check: 'a__cat' would itself get a new 'a__cat__cat' column
# with the check, existing '__cat' columns are simply re-factorized in place
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
X = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5) # new model
m.fit(X, y)
imp = PermutationImportance(m).fit(X, y)
eli5.show_weights(imp, feature_names=cat_feats) # look for the most important features
###Output
_____no_output_____
###Markdown
Data loading
###Code
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Dummy Model
###Code
df.select_dtypes(np.number).columns
feats = ['car_id']
X = df[ feats ].values
y = df['price_value'].values
model = DummyRegressor()
model.fit(X, y)
y_pred = model.predict(X)
mae(y, y_pred)
[x for x in df.columns if 'price' in x]
df['price_currency'].value_counts()
df['price_currency'].value_counts(normalize=True) * 100
df = df[ df['price_currency'] != 'EUR' ]
df.shape
###Output
_____no_output_____
###Markdown
Features
###Code
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else :
df[feat + SUFFIX_CAT] = factorized_values
cat_feats = [x for x in df.columns if SUFFIX_CAT in x ]
cat_feats = [x for x in cat_feats if 'price' not in x ]
len(cat_feats)
X = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5)
m.fit(X, y)
imp = PermutationImportance(m, random_state=0).fit(X, y)
eli5.show_weights(imp, feature_names=cat_feats)
###Output
_____no_output_____ |
Sequences, Time Series and Prediction/Week 3 Recurrent Neural Networks for Time Series/S+P_Week_3_Exercise_Question.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-nightly-2.0-preview
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
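# Illustrative example (added for clarity): for a series [0, 1, ..., 9] and window_size=3,
# the pipeline above yields (features, label) pairs such as ([0,1,2], 3), ([1,2,3], 4), ...,
# ([6,7,8], 9), shuffled and grouped into batches of `batch_size`.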
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x : tf.expand_dims(x, axis=-1), input_shape=[None]),# YOUR CODE HERE
# YOUR CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 100.0)# YOUR CODE HERE
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule],verbose=0)
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
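# Added note: a common heuristic is to pick a learning rate from the region where this
# loss-vs-learning-rate curve is still decreasing smoothly, roughly an order of magnitude
# below the point where the loss becomes unstable; the training cell below uses 1e-5.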
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x : tf.expand_dims(x, axis=-1), input_shape=[None]),# YOUR CODE HERE
# YOUR CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 100.0)# YOUR CODE HERE
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),metrics=["mae"])# PUT YOUR LEARNING RATE HERE#
history = model.fit(dataset,epochs=500,verbose=1)
# FIND A MODEL AND A LR THAT TRAINS TO AN MAE < 3
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____ |
Deep Q_Network/q_learning_2015_priotized_experience/.ipynb_checkpoints/Deep_Q_Network-checkpoint.ipynb | ###Markdown
Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages
###Code
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Instantiate the Environment and AgentInitialize the environment in the code cell below.
###Code
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
###Output
/home/oxygen/anaconda3/lib/python3.7/site-packages/gym/logger.py:30: UserWarning: [33mWARN: Box bound precision lowered by casting to float32[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._)
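Since model.py and dqn_agent.py are not included in this dump, here is only a rough sketch (assumed layer sizes and variable names, not the course's solution files) of a network that maps states to action values, together with the TD-target logic the `learn` method needs:

import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Sketch: map a state vector to one Q-value per action (hidden sizes are an assumption)."""
    def __init__(self, state_size, action_size, seed, hidden=64):
        super(QNetwork, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(state_size, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

# Sketch of the loss computation inside Agent.learn, given sampled tensors
# (states, actions, rewards, next_states, dones) and a discount factor gamma:
#   q_targets_next = qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
#   q_targets = rewards + gamma * q_targets_next * (1 - dones)
#   q_expected = qnetwork_local(states).gather(1, actions)
#   loss = F.mse_loss(q_expected, q_targets)  # then zero_grad(), backward(), optimizer step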
###Code
#just for rendering
for _ in range(3):
state = env.reset()
while True:
env.render()
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
state = next_state
if done:
state = env.reset()
break
env.close()
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# watch an untrained agent
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
###Output
_____no_output_____
###Markdown
3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance!
###Code
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
###Output
_____no_output_____
###Markdown
4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent!
###Code
env.close()
# load the weights from file
#agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
import time
import gym.wrappers as wrappers
env = wrappers.Monitor(env, './data', force = True)
done = 1
for i in range(5):
if done:
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
time.sleep(0.05)
if done:
break
env.close()
###Output
_____no_output_____ |
mavenn/development/21.11.07_revision_development/21.11.07_check_mavenn_for_single_mutants_only_training_using_gb1.ipynb | ###Markdown
This notebook checks mavenn's new feature: if the user supplies only single mutants, then mavenn requires that ge_nonlinearity_type == 'linear', gpmap_type == 'additive', and ge_noise_model_type == 'Gaussian'. Errors are thrown when set_data is called if these requirements are not met.
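As a rough illustration of what that check means (an assumed standalone helper, not mavenn's actual implementation), a dataset is "single mutants only" when every sequence is within Hamming distance 1 of the consensus sequence:

def is_single_mutants_only(sequences, consensus_seq):
    # True when every sequence differs from the consensus in at most one position
    def hamming(a, b):
        return sum(c1 != c2 for c1, c2 in zip(a, b))
    return all(hamming(seq, consensus_seq) <= 1 for seq in sequences)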
###Code
# Standard imports
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import re
import seaborn as sns
import time
import tensorflow as tf
from tensorflow.keras.backend import get_value
%matplotlib inline
# Insert mavenn at beginning of path
import sys
path_to_mavenn_local = '/Users/tareen/Desktop/Research_Projects/2020_mavenn_github/mavenn_git_ssh/'
sys.path.insert(0, path_to_mavenn_local)
#Load mavenn and check path
import mavenn
print(mavenn.__path__)
###Output
['/Users/tareen/Desktop/Research_Projects/2020_mavenn_github/mavenn_git_ssh/mavenn']
###Markdown
Get GB1 single mutant data to test out single mutant training feature
###Code
def get_gb1_single_mutants_df_from_ambler_counts_data():
oslon_single_mutant_positions_data = pd.read_csv('oslon_data_single_mutants_ambler.csv',na_values="nan")
WT_seq = 'QYKLILNGKTLKGETTTEAVDAATAEKVFKQYANDNGVDGEWTYDDATKTFTVTE'
#len(WT_seq)
WT_input_count = 1759616
WT_selection_count = 3041819
sequences = []
fitness = []
for loop_index in range(len(oslon_single_mutant_positions_data)):
mut_index = int(oslon_single_mutant_positions_data['Position'][loop_index])-2
mut = oslon_single_mutant_positions_data['Mutation'][loop_index]
temp_seq = list(WT_seq)
temp_seq[mut_index] = mut
# append sequence
sequences.append(''.join(temp_seq))
# calculate fitness for sequence
input_count = oslon_single_mutant_positions_data['Input Count'][loop_index]
selection_count = oslon_single_mutant_positions_data['Selection Count'][loop_index]
        # add 1 to the selection count so we never take the log of zero
temp_fitness = np.log2(((selection_count+1)/input_count)/(WT_selection_count/WT_input_count))
fitness.append(temp_fitness)
gb1_single_mutants_df = pd.DataFrame({'x':sequences,'y':fitness})
return gb1_single_mutants_df
gb1_single_mutants_df = pd.read_csv('gb1_single_mutants_data.csv',index_col=0)
gb1_single_mutants_df
# Get length of sequences
L = len(gb1_single_mutants_df['x'][0])
# Define model
model = mavenn.Model(regression_type='GE',
L=L,
alphabet='protein',
gpmap_type='additive',
ge_noise_model_type='Gaussian',
ge_nonlinearity_type='linear',
ge_heteroskedasticity_order=0)
# Set training data
model.set_data(x=gb1_single_mutants_df['x'],
y=gb1_single_mutants_df['y'],
validation_frac=0,
shuffle=False)
# write utils method to check if dataset contains only single mutants
consensus_seq = model.x_stats['consensus_seq']
model.x_stats['only_single_mutants']
# Fit model to data
history = model.fit(learning_rate=.00005,
epochs=10,
batch_size=100,
early_stopping=True,
early_stopping_patience=5,
linear_initialization=True)
x_train = gb1_single_mutants_df['x']
y_train = gb1_single_mutants_df['y']
# Evaluate information metrics on the training data
print('On training data:')
# Compute variational information
I_var, dI_var = model.I_variational(x=x_train, y=y_train)
print(f'I_var_train: {I_var:.3f} +- {dI_var:.3f} bits')
# Compute predictive information
I_pred, dI_pred = model.I_predictive(x=x_train, y=y_train)
print(f'I_pred_train: {I_pred:.3f} +- {dI_pred:.3f} bits')
I_var_hist = model.history['I_var']
#val_I_var_hist = model.history['val_I_var']
fig, ax = plt.subplots(1,1,figsize=[4,4])
ax.plot(I_var_hist, label='I_var_train')
#ax.plot(val_I_var_hist, label='I_var_val')
ax.axhline(I_var, color='C2', linestyle=':', label='I_var (train, final)')
ax.axhline(I_pred, color='C3', linestyle=':', label='I_pred (train)')
ax.legend()
ax.set_xlabel('epochs')
ax.set_ylabel('bits')
ax.set_title('training history')
# Compute phi and yhat values
phi = model.x_to_phi(x_train)
yhat = model.phi_to_yhat(phi)
# Create grid for plotting yhat and yqs
phi_lim = [-5, 2.5]
phi_grid = np.linspace(phi_lim[0], phi_lim[1], 1000)
yhat_grid = model.phi_to_yhat(phi_grid)
yqs_grid = model.yhat_to_yq(yhat_grid, q=[.16,.84])
# Create two panels
fig, ax = plt.subplots(1, 1, figsize=[4, 4])
# Illustrate measurement process with GE curve
ax.scatter(phi, y_train, color='C0', s=5, alpha=.2, label='training data')
ax.plot(phi_grid, yhat_grid, linewidth=2, color='C1',
label='$\hat{y} = g(\phi)$')
ax.plot(phi_grid, yqs_grid[:, 0], linestyle='--', color='C1',
label='68% CI')
ax.plot(phi_grid, yqs_grid[:, 1], linestyle='--', color='C1')
ax.set_xlim(phi_lim)
ax.set_xlabel('latent phenotype ($\phi$)')
ax.set_ylabel('measurement ($y$)')
ax.set_title('measurement process')
ax.legend()
# Fix up plot
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
linked_lists/palindrome/palindrome_solution.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Determine if a linked list is a palindrome.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Is a single character or number a palindrome? * No* Can we assume we already have a linked list class that can be used for this problem? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* Empty list -> False* Single element list -> False* Two or more element list, not a palindrome -> False* General case: Palindrome with even length -> True* General case: Palindrome with odd length -> True Algorithm* Reverse the linked list * Iterate through the current linked list * Insert to front the current node into a new linked list* Compare the reversed list with the original list * Only need to compare the first halfComplexity:* Time: O(n)* Space: O(n)Note:* We could also do this iteratively, using a stack to effectively reverse the first half of the string. Code
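As an aside, here is a minimal sketch of the stack-based iterative alternative mentioned in the note above (it assumes the same Node class with data and next attributes; the solution implemented below uses the list-reversal approach instead):

def is_palindrome_with_stack(head):
    # Push the first half onto a stack using fast/slow pointers,
    # then compare the stacked values against the second half.
    if head is None or head.next is None:
        return False
    stack = []
    slow = fast = head
    while fast is not None and fast.next is not None:
        stack.append(slow.data)
        slow = slow.next
        fast = fast.next.next
    if fast is not None:  # odd number of nodes: skip the middle one
        slow = slow.next
    while slow is not None:
        if stack.pop() != slow.data:
            return False
        slow = slow.next
    return True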
###Code
%run ../linked_list/linked_list.py
from __future__ import division
class MyLinkedList(LinkedList):
def is_palindrome(self):
if self.head is None or self.head.next is None:
return False
curr = self.head
reversed_list = MyLinkedList()
length = 0
# Reverse the linked list
while curr is not None:
reversed_list.insert_to_front(curr.data)
length += 1
curr = curr.next
# Compare the reversed list with the original list
# Only need to compare the first half
iterations = length // 2
curr = self.head
curr_reversed = reversed_list.head
for _ in range(iterations):
if curr.data != curr_reversed.data:
return False
curr = curr.next
curr_reversed = curr_reversed.next
return True
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_palindrome.py
import unittest
class TestPalindrome(unittest.TestCase):
def test_palindrome(self):
print('Test: Empty list')
linked_list = MyLinkedList()
self.assertEqual(linked_list.is_palindrome(), False)
print('Test: Single element list')
head = Node(1)
linked_list = MyLinkedList(head)
self.assertEqual(linked_list.is_palindrome(), False)
print('Test: Two element list, not a palindrome')
linked_list.append(2)
self.assertEqual(linked_list.is_palindrome(), False)
print('Test: General case: Palindrome with even length')
head = Node('a')
linked_list = MyLinkedList(head)
linked_list.append('b')
linked_list.append('b')
linked_list.append('a')
self.assertEqual(linked_list.is_palindrome(), True)
print('Test: General case: Palindrome with odd length')
head = Node(1)
linked_list = MyLinkedList(head)
linked_list.append(2)
linked_list.append(3)
linked_list.append(2)
linked_list.append(1)
self.assertEqual(linked_list.is_palindrome(), True)
print('Success: test_palindrome')
def main():
test = TestPalindrome()
test.test_palindrome()
if __name__ == '__main__':
main()
%run -i test_palindrome.py
###Output
Test: Empty list
Test: Single element list
Test: Two element list, not a palindrome
Test: General case: Palindrome with even length
Test: General case: Palindrome with odd length
Success: test_palindrome
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Determine if a linked list is a palindrome.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Is a single character or number a palindrome? * No* Can we assume we already have a linked list class that can be used for this problem? * Yes Test Cases* Empty list -> False* Single element list -> False* Two or more element list, not a palindrome -> False* General case: Palindrome with even length -> True* General case: Palindrome with odd length -> True Algorithm* Reverse the linked list * Iterate through the current linked list * Insert to front the current node into a new linked list* Compare the reversed list with the original list * Only need to compare the first halfComplexity:* Time: O(n)* Space: O(n)Note:* We could also do this iteratively, using a stack to effectively reverse the first half of the string. Code
###Code
%run ../linked_list/linked_list.py
from __future__ import division
class MyLinkedList(LinkedList):
def is_palindrome(self):
if self.head is None or self.head.next is None:
return False
curr = self.head
reversed_list = MyLinkedList()
length = 0
# Reverse the linked list
while curr is not None:
reversed_list.insert_to_front(curr.data)
length += 1
curr = curr.next
# Compare the reversed list with the original list
# Only need to compare the first half
iterations_to_compare_half = length // 2
curr = self.head
curr_reversed = reversed_list.head
for _ in range(0, iterations_to_compare_half):
if curr.data != curr_reversed.data:
return False
curr = curr.next
curr_reversed = curr_reversed.next
return True
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_palindrome.py
from nose.tools import assert_equal
class TestPalindrome(object):
def test_palindrome(self):
print('Test: Empty list')
linked_list = MyLinkedList()
assert_equal(linked_list.is_palindrome(), False)
print('Test: Single element list')
head = Node(1)
linked_list = MyLinkedList(head)
assert_equal(linked_list.is_palindrome(), False)
print('Test: Two element list, not a palindrome')
linked_list.append(2)
assert_equal(linked_list.is_palindrome(), False)
print('Test: General case: Palindrome with even length')
head = Node('a')
linked_list = MyLinkedList(head)
linked_list.append('b')
linked_list.append('b')
linked_list.append('a')
assert_equal(linked_list.is_palindrome(), True)
print('Test: General case: Palindrome with odd length')
head = Node(1)
linked_list = MyLinkedList(head)
linked_list.append(2)
linked_list.append(3)
linked_list.append(2)
linked_list.append(1)
assert_equal(linked_list.is_palindrome(), True)
print('Success: test_palindrome')
def main():
test = TestPalindrome()
test.test_palindrome()
if __name__ == '__main__':
main()
%run -i test_palindrome.py
###Output
Test: Empty list
Test: Single element list
Test: Two element list, not a palindrome
Test: General case: Palindrome with even length
Test: General case: Palindrome with odd length
Success: test_palindrome
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Determine if a linked list is a palindrome.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Is a single character or number a palindrome? * No* Can we assume we already have a linked list class that can be used for this problem? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* Empty list -> False* Single element list -> False* Two or more element list, not a palindrome -> False* General case: Palindrome with even length -> True* General case: Palindrome with odd length -> True Algorithm* Reverse the linked list * Iterate through the current linked list * Insert to front the current node into a new linked list* Compare the reversed list with the original list * Only need to compare the first halfComplexity:* Time: O(n)* Space: O(n)Note:* We could also do this iteratively, using a stack to effectively reverse the first half of the string. Code
###Code
%run ../linked_list/linked_list.py
from __future__ import division
class MyLinkedList(LinkedList):
def is_palindrome(self):
if self.head is None or self.head.next is None:
return False
curr = self.head
reversed_list = MyLinkedList()
length = 0
# Reverse the linked list
while curr is not None:
reversed_list.insert_to_front(curr.data)
length += 1
curr = curr.next
# Compare the reversed list with the original list
# Only need to compare the first half
iterations = length // 2
curr = self.head
curr_reversed = reversed_list.head
for _ in range(iterations):
if curr.data != curr_reversed.data:
return False
curr = curr.next
curr_reversed = curr_reversed.next
return True
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_palindrome.py
from nose.tools import assert_equal
class TestPalindrome(object):
def test_palindrome(self):
print('Test: Empty list')
linked_list = MyLinkedList()
assert_equal(linked_list.is_palindrome(), False)
print('Test: Single element list')
head = Node(1)
linked_list = MyLinkedList(head)
assert_equal(linked_list.is_palindrome(), False)
print('Test: Two element list, not a palindrome')
linked_list.append(2)
assert_equal(linked_list.is_palindrome(), False)
print('Test: General case: Palindrome with even length')
head = Node('a')
linked_list = MyLinkedList(head)
linked_list.append('b')
linked_list.append('b')
linked_list.append('a')
assert_equal(linked_list.is_palindrome(), True)
print('Test: General case: Palindrome with odd length')
head = Node(1)
linked_list = MyLinkedList(head)
linked_list.append(2)
linked_list.append(3)
linked_list.append(2)
linked_list.append(1)
assert_equal(linked_list.is_palindrome(), True)
print('Success: test_palindrome')
def main():
test = TestPalindrome()
test.test_palindrome()
if __name__ == '__main__':
main()
%run -i test_palindrome.py
###Output
Test: Empty list
Test: Single element list
Test: Two element list, not a palindrome
Test: General case: Palindrome with even length
Test: General case: Palindrome with odd length
Success: test_palindrome
|
notebooks/etk_tutorial.ipynb | ###Markdown
This tutorial shows you how to use ETK to extract information for all soccer teams in Italy. Suppose that you want to construct a list of records containing team name, home city, latitude and longitude for every team in Italy. We start with a Wikipedia page that lists all soccer teams in Italy: https://en.wikipedia.org/wiki/List_of_football_clubs_in_Italy. The page has a table for each division. Each table contains the team name and home city, as well as other information that we will ignore for now. The tables don’t contain the latitude and longitude of the cities. You will notice that most city names in the table are links to other Wikipedia pages, and we could get the latitudes and longitudes from there. In this tutorial we will use a different approach, linking the city names to geonames.org, a dataset containing every city in the world. Part 1: Extracting The Team Tables Look at the page, and you will notice that the teams are scattered over multiple tables, one for each division. Fortunately, all the tables have the same structure, which will make our job easier. Defining an ETK module An ETK module organizes the code for a project so that you can put all the extraction code for a project in a reusable module. Often, large projects will consist of multiple ETK modules for different kinds of documents. In this tutorial we will have only one module. First, we load the dependencies we need throughout this tutorial. We also create an `etk` instance, which we will use through the whole process.
###Code
import requests
import json
import jsonpath_ng.ext as jex
import re
import sys
sys.path.append('../')
from etk.extractors.table_extractor import TableExtractor
from etk.extractors.glossary_extractor import GlossaryExtractor
from etk.etk import ETK
from etk.knowledge_graph_schema import KGSchema
kg_schema = KGSchema(json.load(open('./resources/master_config.json')))
etk = ETK(kg_schema=kg_schema)
etk.parser = jex.parse
###Output
_____no_output_____
###Markdown
Reading the HTML file We load the saved HTML of the soccer teams page and keep its contents (the `url` records where the page came from). We also create a `cdr` document; it contains the `raw_content`, `url`, and `dataset` fields. We'll use it in the second part of this tutorial.
###Code
url = 'https://en.wikipedia.org/wiki/List_of_football_clubs_in_Italy'
html_page = open('./resources/italy_teams.html', mode='r', encoding='utf-8').read()
cdr = {
'raw_content': html_page,
'url': url,
'dataset': 'italy_team'
}
print('The first 600 chars of the html page:\n')
print(html_page[:600])
###Output
The first 600 chars of the html page:
<!DOCTYPE html>
<html class="client-nojs" lang="en" dir="ltr">
<head>
<meta charset="UTF-8"/>
<title>List of football clubs in Italy - Wikipedia</title>
<script>document.documentElement.className = document.documentElement.className.replace( /(^|\s)client-nojs(\s|$)/, "$1client-js$2" );</script>
<script>(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber":0,"wgPageName":"List_of_football_clubs_in_Italy","wgTitle":"List of football clubs in Italy","wgCurRevisionId":859334329,"wgRevisionId":859334329,"wgArticl
###Markdown
Extracting the tables Extracting the tables in a Web page is very easy because ETK has a table extractor. We divide this phase into two parts. The first part is to create an instance of TableExtractor and use that instance to extract the raw tables.
###Code
my_table_extractor = TableExtractor()
tables_in_page = my_table_extractor.extract(html_page)[:14]
print('Number of tables in this page:', len(tables_in_page), '\n')
print('The first table in the page shows below: \n')
print(json.dumps(tables_in_page[0].value, indent=2))
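# Hedged sketch (not part of the original tutorial): collect (team, home city) pairs by
# reading the first two cells of every non-header row in each extracted table.
team_city_pairs = []
for t in tables_in_page:
    for row in t.value['rows'][1:]:  # skip the header row
        cells = row['cells']
        if len(cells) >= 2:
            team_city_pairs.append((cells[0]['text'], cells[1]['text']))
print('Number of (team, city) pairs:', len(team_city_pairs))
print(team_city_pairs[:3])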
###Output
Number of tables in this page: 14
The first table in the page shows below:
{
"features": {
"no_of_rows": 21,
"no_of_cells": 105,
"max_cols_in_a_row": 5,
"ratio_of_img_tags_to_cells": 0.0,
"ratio_of_href_tags_to_cells": 0.7238095238095238,
"ratio_of_input_tags_to_cells": 0.0,
"ratio_of_select_tags_to_cells": 0.0,
"ratio_of_colspan_tags_to_cells": 0.0,
"ratio_of_colons_to_cells": 0.0,
"avg_cell_len": 14.942857142857143,
"avg_row_len": 78.71428571428571,
"avg_row_len_dev": 8.490409488646232,
"avg_col_len": 313.8,
"avg_col_len_dev": 3.8774340214067022,
"no_of_cols_containing_num": 2,
"no_of_cols_empty": 0
},
"rows": [
{
"cells": [
{
"cell": "<th>Team\n</th>",
"text": "Team",
"id": "row_0_col_0"
},
{
"cell": "<th>Home city\n</th>",
"text": "Home city",
"id": "row_0_col_1"
},
{
"cell": "<th>Stadium\n</th>",
"text": "Stadium",
"id": "row_0_col_2"
},
{
"cell": "<th>Capacity\n</th>",
"text": "Capacity",
"id": "row_0_col_3"
},
{
"cell": "<th>2017\u201318 season\n</th>",
"text": "2017\u201318 season",
"id": "row_0_col_4"
}
],
"text": "Team | Home city | Stadium | Capacity | 2017\u201318 season",
"html": "<html><body><table><th>Team\n</th>\n<th>Home city\n</th>\n<th>Stadium\n</th>\n<th>Capacity\n</th>\n<th>2017\u201318 season\n</th>\n</table></body></html>",
"id": "row_0"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Atalanta_B.C.\" title=\"Atalanta B.C.\">Atalanta</a>\n</td>",
"text": "Atalanta",
"id": "row_1_col_0"
},
{
"cell": "<td><a href=\"/wiki/Bergamo\" title=\"Bergamo\">Bergamo</a>\n</td>",
"text": "Bergamo",
"id": "row_1_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Atleti_Azzurri_d%27Italia\" title=\"Stadio Atleti Azzurri d'Italia\">Stadio Atleti Azzurri d'Italia</a>\n</td>",
"text": "Stadio Atleti Azzurri d'Italia",
"id": "row_1_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004213000000000000\u2660</span>21,300\n</td>",
"text": "7004213000000000000\u2660 21,300",
"id": "row_1_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">7th in Serie A</a>\n</td>",
"text": "7th in Serie A",
"id": "row_1_col_4"
}
],
"text": "Atalanta | Bergamo | Stadio Atleti Azzurri d'Italia | 7004213000000000000\u2660 21,300 | 7th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Atalanta_B.C.\" title=\"Atalanta B.C.\">Atalanta</a>\n</td>\n<td><a href=\"/wiki/Bergamo\" title=\"Bergamo\">Bergamo</a>\n</td>\n<td><a href=\"/wiki/Stadio_Atleti_Azzurri_d%27Italia\" title=\"Stadio Atleti Azzurri d'Italia\">Stadio Atleti Azzurri d'Italia</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004213000000000000\u2660</span>21,300\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">7th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_1"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Bologna_F.C._1909\" title=\"Bologna F.C. 1909\">Bologna</a>\n</td>",
"text": "Bologna",
"id": "row_2_col_0"
},
{
"cell": "<td><a href=\"/wiki/Bologna\" title=\"Bologna\">Bologna</a>\n</td>",
"text": "Bologna",
"id": "row_2_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Renato_Dall%27Ara\" title=\"Stadio Renato Dall'Ara\">Stadio Renato Dall'Ara</a>\n</td>",
"text": "Stadio Renato Dall'Ara",
"id": "row_2_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004382790000000000\u2660</span>38,279\n</td>",
"text": "7004382790000000000\u2660 38,279",
"id": "row_2_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">15th in Serie A</a>\n</td>",
"text": "15th in Serie A",
"id": "row_2_col_4"
}
],
"text": "Bologna | Bologna | Stadio Renato Dall'Ara | 7004382790000000000\u2660 38,279 | 15th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Bologna_F.C._1909\" title=\"Bologna F.C. 1909\">Bologna</a>\n</td>\n<td><a href=\"/wiki/Bologna\" title=\"Bologna\">Bologna</a>\n</td>\n<td><a href=\"/wiki/Stadio_Renato_Dall%27Ara\" title=\"Stadio Renato Dall'Ara\">Stadio Renato Dall'Ara</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004382790000000000\u2660</span>38,279\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">15th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_2"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Cagliari_Calcio\" title=\"Cagliari Calcio\">Cagliari</a>\n</td>",
"text": "Cagliari",
"id": "row_3_col_0"
},
{
"cell": "<td><a href=\"/wiki/Cagliari\" title=\"Cagliari\">Cagliari</a>\n</td>",
"text": "Cagliari",
"id": "row_3_col_1"
},
{
"cell": "<td><a href=\"/wiki/Sardegna_Arena\" title=\"Sardegna Arena\">Sardegna Arena</a>\n</td>",
"text": "Sardegna Arena",
"id": "row_3_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162330000000000\u2660</span>16,233\n</td>",
"text": "7004162330000000000\u2660 16,233",
"id": "row_3_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">16th in Serie A</a>\n</td>",
"text": "16th in Serie A",
"id": "row_3_col_4"
}
],
"text": "Cagliari | Cagliari | Sardegna Arena | 7004162330000000000\u2660 16,233 | 16th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Cagliari_Calcio\" title=\"Cagliari Calcio\">Cagliari</a>\n</td>\n<td><a href=\"/wiki/Cagliari\" title=\"Cagliari\">Cagliari</a>\n</td>\n<td><a href=\"/wiki/Sardegna_Arena\" title=\"Sardegna Arena\">Sardegna Arena</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162330000000000\u2660</span>16,233\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">16th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_3"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/A.C._ChievoVerona\" title=\"A.C. ChievoVerona\">Chievo</a>\n</td>",
"text": "Chievo",
"id": "row_4_col_0"
},
{
"cell": "<td><a href=\"/wiki/Verona\" title=\"Verona\">Verona</a>\n</td>",
"text": "Verona",
"id": "row_4_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Marc%27Antonio_Bentegodi\" title=\"Stadio Marc'Antonio Bentegodi\">Stadio Marc'Antonio Bentegodi</a>\n</td>",
"text": "Stadio Marc'Antonio Bentegodi",
"id": "row_4_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004384020000000000\u2660</span>38,402\n</td>",
"text": "7004384020000000000\u2660 38,402",
"id": "row_4_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">13th in Serie A</a>\n</td>",
"text": "13th in Serie A",
"id": "row_4_col_4"
}
],
"text": "Chievo | Verona | Stadio Marc'Antonio Bentegodi | 7004384020000000000\u2660 38,402 | 13th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/A.C._ChievoVerona\" title=\"A.C. ChievoVerona\">Chievo</a>\n</td>\n<td><a href=\"/wiki/Verona\" title=\"Verona\">Verona</a>\n</td>\n<td><a href=\"/wiki/Stadio_Marc%27Antonio_Bentegodi\" title=\"Stadio Marc'Antonio Bentegodi\">Stadio Marc'Antonio Bentegodi</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004384020000000000\u2660</span>38,402\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">13th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_4"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Empoli_F.C.\" title=\"Empoli F.C.\">Empoli</a>\n</td>",
"text": "Empoli",
"id": "row_5_col_0"
},
{
"cell": "<td><a href=\"/wiki/Empoli\" title=\"Empoli\">Empoli</a>\n</td>",
"text": "Empoli",
"id": "row_5_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Carlo_Castellani\" title=\"Stadio Carlo Castellani\">Stadio Carlo Castellani</a>\n</td>",
"text": "Stadio Carlo Castellani",
"id": "row_5_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162840000000000\u2660</span>16,284\n</td>",
"text": "7004162840000000000\u2660 16,284",
"id": "row_5_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Champions</a>\n</td>",
"text": "Serie B Champions",
"id": "row_5_col_4"
}
],
"text": "Empoli | Empoli | Stadio Carlo Castellani | 7004162840000000000\u2660 16,284 | Serie B Champions",
"html": "<html><body><table><td><a href=\"/wiki/Empoli_F.C.\" title=\"Empoli F.C.\">Empoli</a>\n</td>\n<td><a href=\"/wiki/Empoli\" title=\"Empoli\">Empoli</a>\n</td>\n<td><a href=\"/wiki/Stadio_Carlo_Castellani\" title=\"Stadio Carlo Castellani\">Stadio Carlo Castellani</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162840000000000\u2660</span>16,284\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Champions</a>\n</td>\n</table></body></html>",
"id": "row_5"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/ACF_Fiorentina\" title=\"ACF Fiorentina\">Fiorentina</a>\n</td>",
"text": "Fiorentina",
"id": "row_6_col_0"
},
{
"cell": "<td><a href=\"/wiki/Florence\" title=\"Florence\">Florence</a>\n</td>",
"text": "Florence",
"id": "row_6_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Artemio_Franchi\" title=\"Stadio Artemio Franchi\">Stadio Artemio Franchi</a>\n</td>",
"text": "Stadio Artemio Franchi",
"id": "row_6_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004431470000000000\u2660</span>43,147\n</td>",
"text": "7004431470000000000\u2660 43,147",
"id": "row_6_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">8th in Serie A</a>\n</td>",
"text": "8th in Serie A",
"id": "row_6_col_4"
}
],
"text": "Fiorentina | Florence | Stadio Artemio Franchi | 7004431470000000000\u2660 43,147 | 8th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/ACF_Fiorentina\" title=\"ACF Fiorentina\">Fiorentina</a>\n</td>\n<td><a href=\"/wiki/Florence\" title=\"Florence\">Florence</a>\n</td>\n<td><a href=\"/wiki/Stadio_Artemio_Franchi\" title=\"Stadio Artemio Franchi\">Stadio Artemio Franchi</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004431470000000000\u2660</span>43,147\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">8th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_6"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Frosinone_Calcio\" title=\"Frosinone Calcio\">Frosinone</a>\n</td>",
"text": "Frosinone",
"id": "row_7_col_0"
},
{
"cell": "<td><a href=\"/wiki/Frosinone\" title=\"Frosinone\">Frosinone</a>\n</td>",
"text": "Frosinone",
"id": "row_7_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Benito_Stirpe\" title=\"Stadio Benito Stirpe\">Stadio Benito Stirpe</a>\n</td>",
"text": "Stadio Benito Stirpe",
"id": "row_7_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162270000000000\u2660</span>16,227\n</td>",
"text": "7004162270000000000\u2660 16,227",
"id": "row_7_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Playoff winner</a>\n</td>",
"text": "Serie B Playoff winner",
"id": "row_7_col_4"
}
],
"text": "Frosinone | Frosinone | Stadio Benito Stirpe | 7004162270000000000\u2660 16,227 | Serie B Playoff winner",
"html": "<html><body><table><td><a href=\"/wiki/Frosinone_Calcio\" title=\"Frosinone Calcio\">Frosinone</a>\n</td>\n<td><a href=\"/wiki/Frosinone\" title=\"Frosinone\">Frosinone</a>\n</td>\n<td><a href=\"/wiki/Stadio_Benito_Stirpe\" title=\"Stadio Benito Stirpe\">Stadio Benito Stirpe</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162270000000000\u2660</span>16,227\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Playoff winner</a>\n</td>\n</table></body></html>",
"id": "row_7"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Genoa_C.F.C.\" title=\"Genoa C.F.C.\">Genoa</a>\n</td>",
"text": "Genoa",
"id": "row_8_col_0"
},
{
"cell": "<td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td>",
"text": "Genoa",
"id": "row_8_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td>",
"text": "Stadio Luigi Ferraris",
"id": "row_8_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td>",
"text": "7004366850000000000\u2660 36,685",
"id": "row_8_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">12th in Serie A</a>\n</td>",
"text": "12th in Serie A",
"id": "row_8_col_4"
}
],
"text": "Genoa | Genoa | Stadio Luigi Ferraris | 7004366850000000000\u2660 36,685 | 12th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Genoa_C.F.C.\" title=\"Genoa C.F.C.\">Genoa</a>\n</td>\n<td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td>\n<td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">12th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_8"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Inter_Milan\" title=\"Inter Milan\">Internazionale</a>\n</td>",
"text": "Internazionale",
"id": "row_9_col_0"
},
{
"cell": "<td><a href=\"/wiki/Milan\" title=\"Milan\">Milan</a>\n</td>",
"text": "Milan",
"id": "row_9_col_1"
},
{
"cell": "<td><a href=\"/wiki/San_Siro\" title=\"San Siro\">San Siro</a>\n</td>",
"text": "San Siro",
"id": "row_9_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td>",
"text": "7004800180000000000\u2660 80,018",
"id": "row_9_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">4th in Serie A</a>\n</td>",
"text": "4th in Serie A",
"id": "row_9_col_4"
}
],
"text": "Internazionale | Milan | San Siro | 7004800180000000000\u2660 80,018 | 4th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Inter_Milan\" title=\"Inter Milan\">Internazionale</a>\n</td>\n<td><a href=\"/wiki/Milan\" title=\"Milan\">Milan</a>\n</td>\n<td><a href=\"/wiki/San_Siro\" title=\"San Siro\">San Siro</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">4th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_9"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Juventus_F.C.\" title=\"Juventus F.C.\">Juventus</a>\n</td>",
"text": "Juventus",
"id": "row_10_col_0"
},
{
"cell": "<td><a href=\"/wiki/Turin\" title=\"Turin\">Turin</a>\n</td>",
"text": "Turin",
"id": "row_10_col_1"
},
{
"cell": "<td><a href=\"/wiki/Juventus_Stadium\" title=\"Juventus Stadium\">Juventus Stadium</a>\n</td>",
"text": "Juventus Stadium",
"id": "row_10_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004415070000000000\u2660</span>41,507\n</td>",
"text": "7004415070000000000\u2660 41,507",
"id": "row_10_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">Serie A Champions</a>\n</td>",
"text": "Serie A Champions",
"id": "row_10_col_4"
}
],
"text": "Juventus | Turin | Juventus Stadium | 7004415070000000000\u2660 41,507 | Serie A Champions",
"html": "<html><body><table><td><a href=\"/wiki/Juventus_F.C.\" title=\"Juventus F.C.\">Juventus</a>\n</td>\n<td><a href=\"/wiki/Turin\" title=\"Turin\">Turin</a>\n</td>\n<td><a href=\"/wiki/Juventus_Stadium\" title=\"Juventus Stadium\">Juventus Stadium</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004415070000000000\u2660</span>41,507\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">Serie A Champions</a>\n</td>\n</table></body></html>",
"id": "row_10"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/S.S._Lazio\" title=\"S.S. Lazio\">Lazio</a>\n</td>",
"text": "Lazio",
"id": "row_11_col_0"
},
{
"cell": "<td><a href=\"/wiki/Rome\" title=\"Rome\">Rome</a>\n</td>",
"text": "Rome",
"id": "row_11_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Olimpico\" title=\"Stadio Olimpico\">Stadio Olimpico</a>\n</td>",
"text": "Stadio Olimpico",
"id": "row_11_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td>",
"text": "7004726980000000000\u2660 72,698",
"id": "row_11_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">5th in Serie A</a>\n</td>",
"text": "5th in Serie A",
"id": "row_11_col_4"
}
],
"text": "Lazio | Rome | Stadio Olimpico | 7004726980000000000\u2660 72,698 | 5th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/S.S._Lazio\" title=\"S.S. Lazio\">Lazio</a>\n</td>\n<td><a href=\"/wiki/Rome\" title=\"Rome\">Rome</a>\n</td>\n<td><a href=\"/wiki/Stadio_Olimpico\" title=\"Stadio Olimpico\">Stadio Olimpico</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">5th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_11"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/A.C._Milan\" title=\"A.C. Milan\">Milan</a>\n</td>",
"text": "Milan",
"id": "row_12_col_0"
},
{
"cell": "<td>Milan\n</td>",
"text": "Milan",
"id": "row_12_col_1"
},
{
"cell": "<td>San Siro\n</td>",
"text": "San Siro",
"id": "row_12_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td>",
"text": "7004800180000000000\u2660 80,018",
"id": "row_12_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">6th in Serie A</a>\n</td>",
"text": "6th in Serie A",
"id": "row_12_col_4"
}
],
"text": "Milan | Milan | San Siro | 7004800180000000000\u2660 80,018 | 6th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/A.C._Milan\" title=\"A.C. Milan\">Milan</a>\n</td>\n<td>Milan\n</td>\n<td>San Siro\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">6th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_12"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/S.S.C._Napoli\" title=\"S.S.C. Napoli\">Napoli</a>\n</td>",
"text": "Napoli",
"id": "row_13_col_0"
},
{
"cell": "<td><a href=\"/wiki/Naples\" title=\"Naples\">Naples</a>\n</td>",
"text": "Naples",
"id": "row_13_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_San_Paolo\" title=\"Stadio San Paolo\">Stadio San Paolo</a>\n</td>",
"text": "Stadio San Paolo",
"id": "row_13_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004602400000000000\u2660</span>60,240\n</td>",
"text": "7004602400000000000\u2660 60,240",
"id": "row_13_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">2nd in Serie A</a>\n</td>",
"text": "2nd in Serie A",
"id": "row_13_col_4"
}
],
"text": "Napoli | Naples | Stadio San Paolo | 7004602400000000000\u2660 60,240 | 2nd in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/S.S.C._Napoli\" title=\"S.S.C. Napoli\">Napoli</a>\n</td>\n<td><a href=\"/wiki/Naples\" title=\"Naples\">Naples</a>\n</td>\n<td><a href=\"/wiki/Stadio_San_Paolo\" title=\"Stadio San Paolo\">Stadio San Paolo</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004602400000000000\u2660</span>60,240\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">2nd in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_13"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Parma_Calcio_1913\" title=\"Parma Calcio 1913\">Parma</a>\n</td>",
"text": "Parma",
"id": "row_14_col_0"
},
{
"cell": "<td><a href=\"/wiki/Parma\" title=\"Parma\">Parma</a>\n</td>",
"text": "Parma",
"id": "row_14_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Ennio_Tardini\" title=\"Stadio Ennio Tardini\">Stadio Ennio Tardini</a>\n</td>",
"text": "Stadio Ennio Tardini",
"id": "row_14_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279060000000000\u2660</span>27,906\n</td>",
"text": "7004279060000000000\u2660 27,906",
"id": "row_14_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">2nd in Serie B</a>\n</td>",
"text": "2nd in Serie B",
"id": "row_14_col_4"
}
],
"text": "Parma | Parma | Stadio Ennio Tardini | 7004279060000000000\u2660 27,906 | 2nd in Serie B",
"html": "<html><body><table><td><a href=\"/wiki/Parma_Calcio_1913\" title=\"Parma Calcio 1913\">Parma</a>\n</td>\n<td><a href=\"/wiki/Parma\" title=\"Parma\">Parma</a>\n</td>\n<td><a href=\"/wiki/Stadio_Ennio_Tardini\" title=\"Stadio Ennio Tardini\">Stadio Ennio Tardini</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279060000000000\u2660</span>27,906\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">2nd in Serie B</a>\n</td>\n</table></body></html>",
"id": "row_14"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/A.S._Roma\" title=\"A.S. Roma\">Roma</a>\n</td>",
"text": "Roma",
"id": "row_15_col_0"
},
{
"cell": "<td>Rome\n</td>",
"text": "Rome",
"id": "row_15_col_1"
},
{
"cell": "<td>Stadio Olimpico\n</td>",
"text": "Stadio Olimpico",
"id": "row_15_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td>",
"text": "7004726980000000000\u2660 72,698",
"id": "row_15_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">3rd in Serie A</a>\n</td>",
"text": "3rd in Serie A",
"id": "row_15_col_4"
}
],
"text": "Roma | Rome | Stadio Olimpico | 7004726980000000000\u2660 72,698 | 3rd in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/A.S._Roma\" title=\"A.S. Roma\">Roma</a>\n</td>\n<td>Rome\n</td>\n<td>Stadio Olimpico\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">3rd in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_15"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/U.C._Sampdoria\" title=\"U.C. Sampdoria\">Sampdoria</a>\n</td>",
"text": "Sampdoria",
"id": "row_16_col_0"
},
{
"cell": "<td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td>",
"text": "Genoa",
"id": "row_16_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td>",
"text": "Stadio Luigi Ferraris",
"id": "row_16_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td>",
"text": "7004366850000000000\u2660 36,685",
"id": "row_16_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">10th in Serie A</a>\n</td>",
"text": "10th in Serie A",
"id": "row_16_col_4"
}
],
"text": "Sampdoria | Genoa | Stadio Luigi Ferraris | 7004366850000000000\u2660 36,685 | 10th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/U.C._Sampdoria\" title=\"U.C. Sampdoria\">Sampdoria</a>\n</td>\n<td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td>\n<td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">10th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_16"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/U.S._Sassuolo_Calcio\" title=\"U.S. Sassuolo Calcio\">Sassuolo</a>\n</td>",
"text": "Sassuolo",
"id": "row_17_col_0"
},
{
"cell": "<td><a href=\"/wiki/Sassuolo\" title=\"Sassuolo\">Sassuolo</a>\n</td>",
"text": "Sassuolo",
"id": "row_17_col_1"
},
{
"cell": "<td><a href=\"/wiki/Mapei_Stadium_%E2%80%93_Citt%C3%A0_del_Tricolore\" title=\"Mapei Stadium \u2013 Citt\u00e0 del Tricolore\">Mapei Stadium \u2013 Citt\u00e0 del Tricolore</a><br/><small>(<a href=\"/wiki/Reggio_Emilia\" title=\"Reggio Emilia\">Reggio Emilia</a>)</small>\n</td>",
"text": "Mapei Stadium \u2013 Citt\u00e0 del Tricolore ( Reggio Emilia )",
"id": "row_17_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004237170000000000\u2660</span>23,717\n</td>",
"text": "7004237170000000000\u2660 23,717",
"id": "row_17_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">11th in Serie A</a>\n</td>",
"text": "11th in Serie A",
"id": "row_17_col_4"
}
],
"text": "Sassuolo | Sassuolo | Mapei Stadium \u2013 Citt\u00e0 del Tricolore ( Reggio Emilia ) | 7004237170000000000\u2660 23,717 | 11th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/U.S._Sassuolo_Calcio\" title=\"U.S. Sassuolo Calcio\">Sassuolo</a>\n</td>\n<td><a href=\"/wiki/Sassuolo\" title=\"Sassuolo\">Sassuolo</a>\n</td>\n<td><a href=\"/wiki/Mapei_Stadium_%E2%80%93_Citt%C3%A0_del_Tricolore\" title=\"Mapei Stadium \u2013 Citt\u00e0 del Tricolore\">Mapei Stadium \u2013 Citt\u00e0 del Tricolore</a><br/><small>(<a href=\"/wiki/Reggio_Emilia\" title=\"Reggio Emilia\">Reggio Emilia</a>)</small>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004237170000000000\u2660</span>23,717\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">11th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_17"
},
{
"cells": [
{
"cell": "<td><a class=\"mw-redirect\" href=\"/wiki/S.P.A.L._2013\" title=\"S.P.A.L. 2013\">SPAL</a>\n</td>",
"text": "SPAL",
"id": "row_18_col_0"
},
{
"cell": "<td><a href=\"/wiki/Ferrara\" title=\"Ferrara\">Ferrara</a>\n</td>",
"text": "Ferrara",
"id": "row_18_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Paolo_Mazza\" title=\"Stadio Paolo Mazza\">Stadio Paolo Mazza</a>\n</td>",
"text": "Stadio Paolo Mazza",
"id": "row_18_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004130200000000000\u2660</span>13,020\n</td>",
"text": "7004130200000000000\u2660 13,020",
"id": "row_18_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">17th in Serie A</a>\n</td>",
"text": "17th in Serie A",
"id": "row_18_col_4"
}
],
"text": "SPAL | Ferrara | Stadio Paolo Mazza | 7004130200000000000\u2660 13,020 | 17th in Serie A",
"html": "<html><body><table><td><a class=\"mw-redirect\" href=\"/wiki/S.P.A.L._2013\" title=\"S.P.A.L. 2013\">SPAL</a>\n</td>\n<td><a href=\"/wiki/Ferrara\" title=\"Ferrara\">Ferrara</a>\n</td>\n<td><a href=\"/wiki/Stadio_Paolo_Mazza\" title=\"Stadio Paolo Mazza\">Stadio Paolo Mazza</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004130200000000000\u2660</span>13,020\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">17th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_18"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Torino_F.C.\" title=\"Torino F.C.\">Torino</a>\n</td>",
"text": "Torino",
"id": "row_19_col_0"
},
{
"cell": "<td>Turin\n</td>",
"text": "Turin",
"id": "row_19_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Olimpico_Grande_Torino\" title=\"Stadio Olimpico Grande Torino\">Stadio Olimpico Grande Torino</a>\n</td>",
"text": "Stadio Olimpico Grande Torino",
"id": "row_19_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279940000000000\u2660</span>27,994\n</td>",
"text": "7004279940000000000\u2660 27,994",
"id": "row_19_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">9th in Serie A</a>\n</td>",
"text": "9th in Serie A",
"id": "row_19_col_4"
}
],
"text": "Torino | Turin | Stadio Olimpico Grande Torino | 7004279940000000000\u2660 27,994 | 9th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Torino_F.C.\" title=\"Torino F.C.\">Torino</a>\n</td>\n<td>Turin\n</td>\n<td><a href=\"/wiki/Stadio_Olimpico_Grande_Torino\" title=\"Stadio Olimpico Grande Torino\">Stadio Olimpico Grande Torino</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279940000000000\u2660</span>27,994\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">9th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_19"
},
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Udinese_Calcio\" title=\"Udinese Calcio\">Udinese</a>\n</td>",
"text": "Udinese",
"id": "row_20_col_0"
},
{
"cell": "<td><a href=\"/wiki/Udine\" title=\"Udine\">Udine</a>\n</td>",
"text": "Udine",
"id": "row_20_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Friuli\" title=\"Stadio Friuli\">Stadio Friuli-Dacia Arena</a>\n</td>",
"text": "Stadio Friuli-Dacia Arena",
"id": "row_20_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004251320000000000\u2660</span>25,132\n</td>",
"text": "7004251320000000000\u2660 25,132",
"id": "row_20_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">14th in Serie A</a>\n</td>",
"text": "14th in Serie A",
"id": "row_20_col_4"
}
],
"text": "Udinese | Udine | Stadio Friuli-Dacia Arena | 7004251320000000000\u2660 25,132 | 14th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Udinese_Calcio\" title=\"Udinese Calcio\">Udinese</a>\n</td>\n<td><a href=\"/wiki/Udine\" title=\"Udine\">Udine</a>\n</td>\n<td><a href=\"/wiki/Stadio_Friuli\" title=\"Stadio Friuli\">Stadio Friuli-Dacia Arena</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004251320000000000\u2660</span>25,132\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">14th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_20"
}
],
"context_before": ". List of Italian Football Championship clubs clubs. For a complete list of clubs see 2018\u201319 Serie A",
"context_after": "Team Home city Stadium Capacity",
"fingerprint": "-018-020-10th-11th-12th-13-132-13th-147-14th-15th-16-16th-17th-18-1909-1913-2013-2017-21-227-23-233-240-25-27-279-27Antonio_Bentegodi-27Ara-27Italia-284-2nd-300-36-38-3rd-402-41-43-4th-507-5th-60-685-698-6th-7004130200000000000-7004162270000000000-7004162330000000000-7004162840000000000-7004213000000000000-7004237170000000000-7004251320000000000-7004279060000000000-7004279940000000000-7004366850000000000-7004382790000000000-7004384020000000000-7004415070000000000-7004431470000000000-7004602400000000000-7004726980000000000-7004800180000000000-717-72-7th-80-8th-906-9318_Serie_A-9318_Serie_B-93_Citt-994-9th-A-A0_del_Tricolore-ACF-ACF_Fiorentina-Antonio-Ara-Arena-Artemio-Atalanta-Atalanta_B-Atleti-Azzurri-B-Benito-Bentegodi-Bergamo-Bologna-Bologna_F-C-C3-Cagliari-Cagliari_Calcio-Calcio-Capacity-Carlo-Castellani-Champions-Chievo-ChievoVerona-Citt\u00e0-Dacia-Dall-E2-Emilia-Empoli-Empoli_F-Ennio-F-Ferrara-Ferraris-Fiorentina-Florence-Franchi-Friuli-Frosinone-Frosinone_Calcio-Genoa-Genoa_C-Grande-Home-Inter-Inter_Milan-Internazionale-Italia-Juventus-Juventus_F-Juventus_Stadium-L-Lazio-Luigi-Mapei-Mapei_Stadium_-Marc-Mazza-Milan-Naples-Napoli-Olimpico-P-Paolo-Parma-Parma_Calcio_1913-Playoff-Reggio-Reggio_Emilia-Renato-Roma-Rome-S-SPAL-Sampdoria-San-San_Siro-Sardegna-Sardegna_Arena-Sassuolo-Serie-Siro-Stadio-Stadio_Artemio_Franchi-Stadio_Atleti_Azzurri_d-Stadio_Benito_Stirpe-Stadio_Carlo_Castellani-Stadio_Ennio_Tardini-Stadio_Friuli-Stadio_Luigi_Ferraris-Stadio_Marc-Stadio_Olimpico-Stadio_Olimpico_Grande_Torino-Stadio_Paolo_Mazza-Stadio_Renato_Dall-Stadio_San_Paolo-Stadium-Stirpe-Tardini-Team-Torino-Torino_F-Tricolore-Turin-U-Udine-Udinese-Udinese_Calcio-Verona-_1909-_2013-_ChievoVerona-_Lazio-_Milan-_Napoli-_Roma-_Sampdoria-_Sassuolo_Calcio-a-align-br-center-city-class-d-del-display-href-in-mw-none-redirect-season-small-sortkey-span-style-table-td-text-th-title-tr-wiki-winner",
"html": "<table><tr><th>Team\n</th><th>Home city\n</th><th>Stadium\n</th><th>Capacity\n</th><th>2017\u201318 season\n</th></tr><tr><td><a href=\"/wiki/Atalanta_B.C.\" title=\"Atalanta B.C.\">Atalanta</a>\n</td><td><a href=\"/wiki/Bergamo\" title=\"Bergamo\">Bergamo</a>\n</td><td><a href=\"/wiki/Stadio_Atleti_Azzurri_d%27Italia\" title=\"Stadio Atleti Azzurri d'Italia\">Stadio Atleti Azzurri d'Italia</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004213000000000000\u2660</span>21,300\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">7th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Bologna_F.C._1909\" title=\"Bologna F.C. 1909\">Bologna</a>\n</td><td><a href=\"/wiki/Bologna\" title=\"Bologna\">Bologna</a>\n</td><td><a href=\"/wiki/Stadio_Renato_Dall%27Ara\" title=\"Stadio Renato Dall'Ara\">Stadio Renato Dall'Ara</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004382790000000000\u2660</span>38,279\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">15th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Cagliari_Calcio\" title=\"Cagliari Calcio\">Cagliari</a>\n</td><td><a href=\"/wiki/Cagliari\" title=\"Cagliari\">Cagliari</a>\n</td><td><a href=\"/wiki/Sardegna_Arena\" title=\"Sardegna Arena\">Sardegna Arena</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162330000000000\u2660</span>16,233\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">16th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/A.C._ChievoVerona\" title=\"A.C. ChievoVerona\">Chievo</a>\n</td><td><a href=\"/wiki/Verona\" title=\"Verona\">Verona</a>\n</td><td><a href=\"/wiki/Stadio_Marc%27Antonio_Bentegodi\" title=\"Stadio Marc'Antonio Bentegodi\">Stadio Marc'Antonio Bentegodi</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004384020000000000\u2660</span>38,402\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">13th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Empoli_F.C.\" title=\"Empoli F.C.\">Empoli</a>\n</td><td><a href=\"/wiki/Empoli\" title=\"Empoli\">Empoli</a>\n</td><td><a href=\"/wiki/Stadio_Carlo_Castellani\" title=\"Stadio Carlo Castellani\">Stadio Carlo Castellani</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162840000000000\u2660</span>16,284\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Champions</a>\n</td></tr><tr><td><a href=\"/wiki/ACF_Fiorentina\" title=\"ACF Fiorentina\">Fiorentina</a>\n</td><td><a href=\"/wiki/Florence\" title=\"Florence\">Florence</a>\n</td><td><a href=\"/wiki/Stadio_Artemio_Franchi\" title=\"Stadio Artemio Franchi\">Stadio Artemio Franchi</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004431470000000000\u2660</span>43,147\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">8th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Frosinone_Calcio\" title=\"Frosinone Calcio\">Frosinone</a>\n</td><td><a href=\"/wiki/Frosinone\" title=\"Frosinone\">Frosinone</a>\n</td><td><a href=\"/wiki/Stadio_Benito_Stirpe\" title=\"Stadio Benito Stirpe\">Stadio Benito Stirpe</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004162270000000000\u2660</span>16,227\n</td><td><a 
href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">Serie B Playoff winner</a>\n</td></tr><tr><td><a href=\"/wiki/Genoa_C.F.C.\" title=\"Genoa C.F.C.\">Genoa</a>\n</td><td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td><td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">12th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Inter_Milan\" title=\"Inter Milan\">Internazionale</a>\n</td><td><a href=\"/wiki/Milan\" title=\"Milan\">Milan</a>\n</td><td><a href=\"/wiki/San_Siro\" title=\"San Siro\">San Siro</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">4th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Juventus_F.C.\" title=\"Juventus F.C.\">Juventus</a>\n</td><td><a href=\"/wiki/Turin\" title=\"Turin\">Turin</a>\n</td><td><a href=\"/wiki/Juventus_Stadium\" title=\"Juventus Stadium\">Juventus Stadium</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004415070000000000\u2660</span>41,507\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">Serie A Champions</a>\n</td></tr><tr><td><a href=\"/wiki/S.S._Lazio\" title=\"S.S. Lazio\">Lazio</a>\n</td><td><a href=\"/wiki/Rome\" title=\"Rome\">Rome</a>\n</td><td><a href=\"/wiki/Stadio_Olimpico\" title=\"Stadio Olimpico\">Stadio Olimpico</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">5th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/A.C._Milan\" title=\"A.C. Milan\">Milan</a>\n</td><td>Milan\n</td><td>San Siro\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004800180000000000\u2660</span>80,018\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">6th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/S.S.C._Napoli\" title=\"S.S.C. Napoli\">Napoli</a>\n</td><td><a href=\"/wiki/Naples\" title=\"Naples\">Naples</a>\n</td><td><a href=\"/wiki/Stadio_San_Paolo\" title=\"Stadio San Paolo\">Stadio San Paolo</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004602400000000000\u2660</span>60,240\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">2nd in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Parma_Calcio_1913\" title=\"Parma Calcio 1913\">Parma</a>\n</td><td><a href=\"/wiki/Parma\" title=\"Parma\">Parma</a>\n</td><td><a href=\"/wiki/Stadio_Ennio_Tardini\" title=\"Stadio Ennio Tardini\">Stadio Ennio Tardini</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279060000000000\u2660</span>27,906\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_B\" title=\"2017\u201318 Serie B\">2nd in Serie B</a>\n</td></tr><tr><td><a href=\"/wiki/A.S._Roma\" title=\"A.S. 
Roma\">Roma</a>\n</td><td>Rome\n</td><td>Stadio Olimpico\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004726980000000000\u2660</span>72,698\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">3rd in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/U.C._Sampdoria\" title=\"U.C. Sampdoria\">Sampdoria</a>\n</td><td><a href=\"/wiki/Genoa\" title=\"Genoa\">Genoa</a>\n</td><td><a href=\"/wiki/Stadio_Luigi_Ferraris\" title=\"Stadio Luigi Ferraris\">Stadio Luigi Ferraris</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004366850000000000\u2660</span>36,685\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">10th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/U.S._Sassuolo_Calcio\" title=\"U.S. Sassuolo Calcio\">Sassuolo</a>\n</td><td><a href=\"/wiki/Sassuolo\" title=\"Sassuolo\">Sassuolo</a>\n</td><td><a href=\"/wiki/Mapei_Stadium_%E2%80%93_Citt%C3%A0_del_Tricolore\" title=\"Mapei Stadium \u2013 Citt\u00e0 del Tricolore\">Mapei Stadium \u2013 Citt\u00e0 del Tricolore</a><br/><small>(<a href=\"/wiki/Reggio_Emilia\" title=\"Reggio Emilia\">Reggio Emilia</a>)</small>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004237170000000000\u2660</span>23,717\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">11th in Serie A</a>\n</td></tr><tr><td><a class=\"mw-redirect\" href=\"/wiki/S.P.A.L._2013\" title=\"S.P.A.L. 2013\">SPAL</a>\n</td><td><a href=\"/wiki/Ferrara\" title=\"Ferrara\">Ferrara</a>\n</td><td><a href=\"/wiki/Stadio_Paolo_Mazza\" title=\"Stadio Paolo Mazza\">Stadio Paolo Mazza</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004130200000000000\u2660</span>13,020\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">17th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Torino_F.C.\" title=\"Torino F.C.\">Torino</a>\n</td><td>Turin\n</td><td><a href=\"/wiki/Stadio_Olimpico_Grande_Torino\" title=\"Stadio Olimpico Grande Torino\">Stadio Olimpico Grande Torino</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004279940000000000\u2660</span>27,994\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">9th in Serie A</a>\n</td></tr><tr><td><a href=\"/wiki/Udinese_Calcio\" title=\"Udinese Calcio\">Udinese</a>\n</td><td><a href=\"/wiki/Udine\" title=\"Udine\">Udine</a>\n</td><td><a href=\"/wiki/Stadio_Friuli\" title=\"Stadio Friuli\">Stadio Friuli-Dacia Arena</a>\n</td><td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004251320000000000\u2660</span>25,132\n</td><td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">14th in Serie A</a>\n</td></tr></table>",
"text": "Team | Home city | Stadium | Capacity | 2017\u201318 season\nAtalanta | Bergamo | Stadio Atleti Azzurri d'Italia | 7004213000000000000\u2660 21,300 | 7th in Serie A\nBologna | Bologna | Stadio Renato Dall'Ara | 7004382790000000000\u2660 38,279 | 15th in Serie A\nCagliari | Cagliari | Sardegna Arena | 7004162330000000000\u2660 16,233 | 16th in Serie A\nChievo | Verona | Stadio Marc'Antonio Bentegodi | 7004384020000000000\u2660 38,402 | 13th in Serie A\nEmpoli | Empoli | Stadio Carlo Castellani | 7004162840000000000\u2660 16,284 | Serie B Champions\nFiorentina | Florence | Stadio Artemio Franchi | 7004431470000000000\u2660 43,147 | 8th in Serie A\nFrosinone | Frosinone | Stadio Benito Stirpe | 7004162270000000000\u2660 16,227 | Serie B Playoff winner\nGenoa | Genoa | Stadio Luigi Ferraris | 7004366850000000000\u2660 36,685 | 12th in Serie A\nInternazionale | Milan | San Siro | 7004800180000000000\u2660 80,018 | 4th in Serie A\nJuventus | Turin | Juventus Stadium | 7004415070000000000\u2660 41,507 | Serie A Champions\nLazio | Rome | Stadio Olimpico | 7004726980000000000\u2660 72,698 | 5th in Serie A\nMilan | Milan | San Siro | 7004800180000000000\u2660 80,018 | 6th in Serie A\nNapoli | Naples | Stadio San Paolo | 7004602400000000000\u2660 60,240 | 2nd in Serie A\nParma | Parma | Stadio Ennio Tardini | 7004279060000000000\u2660 27,906 | 2nd in Serie B\nRoma | Rome | Stadio Olimpico | 7004726980000000000\u2660 72,698 | 3rd in Serie A\nSampdoria | Genoa | Stadio Luigi Ferraris | 7004366850000000000\u2660 36,685 | 10th in Serie A\nSassuolo | Sassuolo | Mapei Stadium \u2013 Citt\u00e0 del Tricolore ( Reggio Emilia ) | 7004237170000000000\u2660 23,717 | 11th in Serie A\nSPAL | Ferrara | Stadio Paolo Mazza | 7004130200000000000\u2660 13,020 | 17th in Serie A\nTorino | Turin | Stadio Olimpico Grande Torino | 7004279940000000000\u2660 27,994 | 9th in Serie A\nUdinese | Udine | Stadio Friuli-Dacia Arena | 7004251320000000000\u2660 25,132 | 14th in Serie A\n"
}
###Markdown
In the second part, we use JSON paths to do further table extraction. Aside: ETK uses JSON paths to access data in JSON documents. Take a look at the excellent and short introduction to JSON paths: http://goessner.net/articles/JsonPath/
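For example, a JSON path such as `$.cells[0:4].text` selects the `text` field of the first four cells of a row. A minimal, self-contained illustration with `jsonpath_ng` (the toy `row` below is a hypothetical example shaped like the table rows shown above):
```python
import jsonpath_ng.ext as jex

row = {"cells": [{"text": "SPAL"}, {"text": "Ferrara"},
                 {"text": "Stadio Paolo Mazza"}, {"text": "13,020"}]}
matches = jex.parse('$.cells[0:4].text').find(row)
print([m.value for m in matches])  # ['SPAL', 'Ferrara', 'Stadio Paolo Mazza', '13,020']
```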
###Code
all_json_path = '$.cells[0:4].text'
docs = list()
for table in tables_in_page:
# skipping the first row, the heading
for row in table.value['rows'][1:]:
doc = etk.create_document(row)
row_values = doc.select_segments(all_json_path)
# add the information we extracted in the knowledge graph of the doc.
doc.kg.add_value('team', value=row_values[0].value)
doc.kg.add_value('city_name', value=row_values[1].value)
doc.kg.add_value('stadium', value=row_values[2].value)
capacity_split = re.split(' |,', row_values[3].value)
if capacity_split[-1] != '':
capacity = int(capacity_split[-2] + capacity_split[-1]) if len(capacity_split) > 1 else int(
capacity_split[-1])
doc.kg.add_value('capacity', value=capacity)
docs.append(doc)
print('Number of rows extracted from that page', len(docs), '\n')
print('Sample rows(5):')
for doc in docs[:5]:
print(doc.kg.value, '\n')
###Output
Number of rows extracted from that page 258
Sample rows(5):
{'team': [{'value': 'Atalanta', 'key': 'atalanta'}], 'city_name': [{'value': 'Bergamo', 'key': 'bergamo'}], 'stadium': [{'value': "Stadio Atleti Azzurri d'Italia", 'key': "stadio atleti azzurri d'italia"}], 'capacity': [{'value': 21300, 'key': '21300'}]}
{'team': [{'value': 'Bologna', 'key': 'bologna'}], 'city_name': [{'value': 'Bologna', 'key': 'bologna'}], 'stadium': [{'value': "Stadio Renato Dall'Ara", 'key': "stadio renato dall'ara"}], 'capacity': [{'value': 38279, 'key': '38279'}]}
{'team': [{'value': 'Cagliari', 'key': 'cagliari'}], 'city_name': [{'value': 'Cagliari', 'key': 'cagliari'}], 'stadium': [{'value': 'Sardegna Arena', 'key': 'sardegna arena'}], 'capacity': [{'value': 16233, 'key': '16233'}]}
{'team': [{'value': 'Chievo', 'key': 'chievo'}], 'city_name': [{'value': 'Verona', 'key': 'verona'}], 'stadium': [{'value': "Stadio Marc'Antonio Bentegodi", 'key': "stadio marc'antonio bentegodi"}], 'capacity': [{'value': 38402, 'key': '38402'}]}
{'team': [{'value': 'Empoli', 'key': 'empoli'}], 'city_name': [{'value': 'Empoli', 'key': 'empoli'}], 'stadium': [{'value': 'Stadio Carlo Castellani', 'key': 'stadio carlo castellani'}], 'capacity': [{'value': 16284, 'key': '16284'}]}
###Markdown
The extracted tables are now stored in your JSON document. Next, we construct a dict that maps city names to all geonames records that contain the city name, restricted to cities with population greater than 25,000.
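Based on the fields used later in this notebook, each key of that file is a city name and each value is a list of geonames records, roughly of the following (illustrative) shape; the real file may contain more fields and more records per name:
```python
# illustrative sketch only -- values taken from the Bergamo record shown further below
city_dataset_example = {
    "Bergamo": [
        {"population": 114162, "state": "Lombardia", "country": "Italy",
         "latitude": "45.69601", "longitude": "9.66721"}
    ],
    # ... one entry per city name
}
```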
###Code
file_name = './resources/cities_ppl_25000.json'
file = open(file_name, 'r')
city_dataset = json.loads(file.read())
file.close()
city_list = list(city_dataset.keys())
print('There are', len(city_list), 'cities with population greater than or equal to 25,000.\n')
print('City list samples(20):\n')
print(city_list[:20])
###Output
There are 15117 cities with population greater than or equal to 25,000.
City list samples(20):
['Marion', 'Fes', 'Fes al Bali', 'Gravina in Puglia', 'Nawada', 'Pensacola', 'Pedro Betancourt', 'Uriangato', 'Fiditi', 'Wilkes-Barre', 'Kafue', 'Chipata', 'Sawangan', 'Tuxpan de Rodriguez Cano', 'Rosny-sous-Bois', 'Caete', 'Kafr ad Dawwar', 'Reynoldsburg', 'Simferopol', 'Ouargla']
###Markdown
Identifying the city names in geonames and linking to geonames There are many ways to do this step. We will do it using the ETK glossary extractor to illustrate how to use other extractors and how to chain the results of one extractor as input to other extractors. Using data from the geonames.org web site, we prepared a list of all cities in the world with population greater than 25,000. We use this small glossary to make the code run faster, but you may want to try it with the full list of cities. First, we need to load the glossary in ETK. We're using the default tokenizer to tokenize the strings, and we set the `ngrams` parameter to 3; setting `ngrams` to zero would instead let the program choose the best n-gram number automatically.
###Code
my_glossary_extractor = GlossaryExtractor(glossary=city_list, extractor_name='tutorial_glossary',
tokenizer=etk.default_tokenizer, ngrams=3,
case_sensitive=False)
###Output
_____no_output_____
###Markdown
Now we are going to use the glossary to extract from the `Home city` column all the strings that match names in geonames. This method will allow us to extract the geonames city name from cells that may contain extraneous information. To run the glossary extractor over all cells containing `Home city`, we use a JSON path that selects these cells across all tables. Our list of extractions has the names of cities that we know appear in geonames. Often, different cities in the world have the same name (e.g., Paris, France and Paris, Texas). To get the latitude and longitude, we need to identify the correct city. We know all the cities are in Italy, so we can easily filter.
###Code
hit_count = 0
for doc in docs:
city_json_path = '$.cells[1].text'
row_values = doc.select_segments(city_json_path)
# use the city field of the doc, run the GlossaryExtractor
extractions = doc.extract(my_glossary_extractor, row_values[0])
if extractions:
path = '$."' + extractions[0].value + '"[?(@.country == "Italy")]'
jsonpath_expr = jex.parse(path)
city_match = jsonpath_expr.find(city_dataset)
if city_match:
hit_count += 1
# add corresponding values of city_dataset into knowledge graph of the doc
for field in city_match[0].value:
doc.kg.add_value(field, value=city_match[0].value[field])
print('There\'re', hit_count, 'hits for city_list.\n')
print('Final result sample:\n')
print(json.dumps(docs[0].kg.value, indent=2))
###Output
There're 138 hits for city_list.
Final result sample:
{
"team": [
{
"value": "Atalanta",
"key": "atalanta"
}
],
"city_name": [
{
"value": "Bergamo",
"key": "bergamo"
}
],
"stadium": [
{
"value": "Stadio Atleti Azzurri d'Italia",
"key": "stadio atleti azzurri d'italia"
}
],
"capacity": [
{
"value": 21300,
"key": "21300"
}
],
"population": [
{
"value": 114162,
"key": "114162"
}
],
"state": [
{
"value": "Lombardia",
"key": "lombardia"
}
],
"country": [
{
"value": "Italy",
"key": "italy"
}
],
"latitude": [
{
"value": "45.69601",
"key": "45.69601"
}
],
"longitude": [
{
"value": "9.66721",
"key": "9.66721"
}
]
}
###Markdown
Part 2 ETK Module
###Code
import os
import sys
import json
import requests
import jsonpath_ng.ext as jex
import re
from etk.etk import ETK
from etk.document import Document
from etk.etk_module import ETKModule
from etk.knowledge_graph_schema import KGSchema
from etk.utilities import Utility
from etk.extractors.table_extractor import TableExtractor
from etk.extractors.glossary_extractor import GlossaryExtractor
class ItalyTeamsModule(ETKModule):
def __init__(self, etk):
ETKModule.__init__(self, etk)
self.my_table_extractor = TableExtractor()
file_name = './resources/cities_ppl_25000.json'
file = open(file_name, 'r')
self.city_dataset = json.loads(file.read())
file.close()
self.city_list = list(self.city_dataset.keys())
self.my_glossary_extractor = GlossaryExtractor(glossary=self.city_list, extractor_name='tutorial_glossary',
tokenizer=etk.default_tokenizer, ngrams=3,
case_sensitive=False)
def process_document(self, cdr_doc: Document):
new_docs = list()
doc_json = cdr_doc.cdr_document
if 'raw_content' in doc_json and doc_json['raw_content'].strip() != '':
tables_in_page = self.my_table_extractor.extract(
doc_json['raw_content'])[:14]
for table in tables_in_page:
# skipping the first row, the heading
for row in table.value['rows'][1:]:
doc = etk.create_document(row)
all_json_path = '$.cells[0:4].text'
row_values = doc.select_segments(all_json_path)
# add the information we extracted in the knowledge graph of the doc.
doc.kg.add_value('team', value=row_values[0].value)
doc.kg.add_value('city_name', value=row_values[1].value)
doc.kg.add_value('stadium', value=row_values[2].value)
capacity_split = re.split(' |,', row_values[3].value)
if capacity_split[-1] != '':
capacity = int(capacity_split[-2] + capacity_split[-1]) if len(capacity_split) > 1 else int(
capacity_split[-1])
doc.kg.add_value('capacity', value=capacity)
city_json_path = '$.cells[1].text'
row_values = doc.select_segments(city_json_path)
# use the city field of the doc, run the GlossaryExtractor
extractions = doc.extract(
self.my_glossary_extractor, row_values[0])
if extractions:
path = '$."' + \
extractions[0].value + '"[?(@.country == "Italy")]'
jsonpath_expr = jex.parse(path)
city_match = jsonpath_expr.find(self.city_dataset)
if city_match:
# add corresponding values of city_dataset into knowledge graph of the doc
for field in city_match[0].value:
doc.kg.add_value(
field, value=city_match[0].value[field])
new_docs.append(doc)
return new_docs
def document_selector(self, doc) -> bool:
return doc.cdr_document.get("dataset") == "italy_team"
if __name__ == "__main__":
url = 'https://en.wikipedia.org/wiki/List_of_football_clubs_in_Italy'
html_page = open('./resources/italy_teams.html', mode='r', encoding='utf-8').read()
cdr = {
'raw_content': html_page,
'url': url,
'dataset': 'italy_team'
}
kg_schema = KGSchema(json.load(open('./resources/master_config.json')))
etk = ETK(modules=ItalyTeamsModule, kg_schema=kg_schema)
etk.parser = jex.parse
cdr_doc = Document(etk, cdr_document=cdr, mime_type='json', url=cdr['url'])
results = etk.process_ems(cdr_doc)[1:]
print('Total docs:', len(results))
print("Sample result:\n")
print(json.dumps(results[0].value, indent=2))
###Output
Total docs: 258
Sample result:
{
"cells": [
{
"cell": "<td><a href=\"/wiki/Atalanta_B.C.\" title=\"Atalanta B.C.\">Atalanta</a>\n</td>",
"text": "Atalanta",
"id": "row_1_col_0"
},
{
"cell": "<td><a href=\"/wiki/Bergamo\" title=\"Bergamo\">Bergamo</a>\n</td>",
"text": "Bergamo",
"id": "row_1_col_1"
},
{
"cell": "<td><a href=\"/wiki/Stadio_Atleti_Azzurri_d%27Italia\" title=\"Stadio Atleti Azzurri d'Italia\">Stadio Atleti Azzurri d'Italia</a>\n</td>",
"text": "Stadio Atleti Azzurri d'Italia",
"id": "row_1_col_2"
},
{
"cell": "<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004213000000000000\u2660</span>21,300\n</td>",
"text": "7004213000000000000\u2660 21,300",
"id": "row_1_col_3"
},
{
"cell": "<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">7th in Serie A</a>\n</td>",
"text": "7th in Serie A",
"id": "row_1_col_4"
}
],
"text": "Atalanta | Bergamo | Stadio Atleti Azzurri d'Italia | 7004213000000000000\u2660 21,300 | 7th in Serie A",
"html": "<html><body><table><td><a href=\"/wiki/Atalanta_B.C.\" title=\"Atalanta B.C.\">Atalanta</a>\n</td>\n<td><a href=\"/wiki/Bergamo\" title=\"Bergamo\">Bergamo</a>\n</td>\n<td><a href=\"/wiki/Stadio_Atleti_Azzurri_d%27Italia\" title=\"Stadio Atleti Azzurri d'Italia\">Stadio Atleti Azzurri d'Italia</a>\n</td>\n<td style=\"text-align:center;\"><span class=\"sortkey\" style=\"display:none\">7004213000000000000\u2660</span>21,300\n</td>\n<td><a href=\"/wiki/2017%E2%80%9318_Serie_A\" title=\"2017\u201318 Serie A\">7th in Serie A</a>\n</td>\n</table></body></html>",
"id": "row_1",
"provenances": [
{
"@id": 0,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "Atalanta"
},
{
"@id": 1,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "Bergamo"
},
{
"@id": 2,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "Stadio Atleti Azzurri d'Italia"
},
{
"@id": 3,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "21300"
},
{
"@id": 4,
"@type": "extraction_provenance_record",
"method": "tutorial_glossary",
"confidence": 1.0,
"origin_record": {
"path": "cells.[1].text",
"start_char": 0,
"end_char": 7
}
},
{
"@id": 5,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "114162"
},
{
"@id": 6,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "Lombardia"
},
{
"@id": 7,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "Italy"
},
{
"@id": 8,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "45.69601"
},
{
"@id": 9,
"@type": "kg_provenance_record",
"reference_type": "constant",
"value": "9.66721"
}
],
"knowledge_graph": {
"team": [
{
"value": "Atalanta",
"key": "atalanta"
}
],
"city_name": [
{
"value": "Bergamo",
"key": "bergamo"
}
],
"stadium": [
{
"value": "Stadio Atleti Azzurri d'Italia",
"key": "stadio atleti azzurri d'italia"
}
],
"capacity": [
{
"value": 21300,
"key": "21300"
}
],
"population": [
{
"value": 114162,
"key": "114162"
}
],
"state": [
{
"value": "Lombardia",
"key": "lombardia"
}
],
"country": [
{
"value": "Italy",
"key": "italy"
}
],
"latitude": [
{
"value": "45.69601",
"key": "45.69601"
}
],
"longitude": [
{
"value": "9.66721",
"key": "9.66721"
}
]
},
"doc_id": "cd0741601f62119dbd8869375ea6758acb1921b9c5cf8dbf7d09e05360c21b50"
}
|
10/Numerical_optimization.ipynb | ###Markdown
Numerical optimization You will learn to solve non-convex multi-dimensional optimization problems using numerical optimization with multistart and nesting (**scipy.optimize**). You will learn simple function approximation using linear interpolation (**scipy.interpolate**). **Links:** 1. **scipy.optimize:** [overview](https://docs.scipy.org/doc/scipy/reference/optimize.html) + [tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) 2. **scipy.interpolate:** [overview](https://docs.scipy.org/doc/scipy/reference/interpolate.html) + [tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html) **Useful note:** [Numerical Optimization in MATLAB](http://web.econ.ku.dk/munk-nielsen/notes/noteOptimization.pdf) (by Anders Munk-Nielsen)
###Code
import numpy as np
import scipy as sp
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
###Output
_____no_output_____
###Markdown
Introduction All **optimization problems** are characterized by: 1. Control vector (choices), $\boldsymbol{x} \in \mathbb{R}^k$ 2. Objective function (payoff) to minimize, $f:\mathbb{R}^k \rightarrow \mathbb{R}$ (differentiable or not) 3. Constraints, i.e. $\boldsymbol{x} \in C \subseteq \mathbb{R}^k$ (linear or non-linear interdependence) **Maximization** is just **minimization** of $-f$. > Note that $f$ might also take other inputs (parameters or a dataset), but these are fixed, and therefore not variables we optimize over. All **optimizers** (minimizers) follow the structure: 1. Make initial guess 2. Evaluate the function (and perhaps gradients) 3. Check for convergence 4. Update guess and return to step 2 **Convergence:** "Small" change in function value since last iteration (or zero gradient). **Characteristics** of optimizers: 1. Use gradients or not. 2. Allow for specifying bounds. 3. Allow for specifying general constraints. **Gradients** provide useful information, but can be costly to compute (using an analytical formula or numerically). **Penalty terms** can (sometimes) instead be used to enforce bounds and constraints. **Optimizers** you should know: 1. **Nelder-Mead:** * **Pro:** Robust (to e.g. noise in the objective function) and does not require derivatives. * **Con:** Slow convergence. No bounds or constraints. * **Newton-CG:** * **Pro:** Requires few iterations. Very precise with an analytical hessian for smooth functions. * **Con:** Costly computation of the hessian. No bounds or constraints. * **BFGS:** (like Newton, but with a smart approximation of the hessian) * **Pro:** Requires few function evaluations. * **Con:** No bounds or constraints. * **L-BFGS-B:** Like BFGS, but allows for bounds. * **SLSQP:** * **Pro:** Bounds and constraints in multiple dimensions. * **Con:** Not as efficient as BFGS. Gradient based optimizers Let us look at the idea behind gradient based optimizers. **One dimensional intuition:** Consider the second-order Taylor approximation around $x_n$: $$ f_T(x) = f_T(x_n + \Delta x) \approx f(x_n)+ f^{\prime}(x_n) \Delta x + \frac{1}{2} f^{\prime\prime}(x_n) (\Delta x)^2$$ Find the minimum w.r.t. $\Delta x$ by solving the FOC: $$0 = \frac{d}{d\Delta x} f_T(x) = f^{\prime}(x_n) + f^{\prime\prime}(x_n) \Delta x \Leftrightarrow \Delta x = -\frac{f^{\prime}(x_n)}{f^{\prime\prime}(x_n)}$$ **Algorithm:** `minimize_newton()` 1. Choose tolerance $\epsilon>0$, guess on $\boldsymbol{x}_0$, compute $f(\boldsymbol{x}_0)$, and set $n=1$. 2. Compute $\nabla f(\boldsymbol{x}_{n-1})$ (gradient/jacobian) and $\boldsymbol{H}f(\boldsymbol{x}_{n-1})$ (hessian). 3. Compute new guess $$ \boldsymbol{x}_{n} = \boldsymbol{x}_{n-1} - [\boldsymbol{H}f(\boldsymbol{x}_{n-1})]^{-1} \nabla f(\boldsymbol{x}_{n-1}) $$ 4. If $|f(\boldsymbol{x}_n)-f(\boldsymbol{x}_{n-1})| < \epsilon$ then stop. 5. Set $n = n + 1$ and return to step 2.
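Before the multi-dimensional implementation below, here is a minimal one-dimensional sketch of the same update rule, $\Delta x = -f^{\prime}(x_{n-1})/f^{\prime\prime}(x_{n-1})$ (the test function and starting point are arbitrary choices, used only for illustration):
```python
# 1-D Newton iteration on f(x) = (x-2)**2 + 0.1*x**4 (arbitrary convex test function)
f = lambda x: (x-2)**2 + 0.1*x**4
fp = lambda x: 2*(x-2) + 0.4*x**3   # first derivative
fpp = lambda x: 2 + 1.2*x**2        # second derivative (positive everywhere)

x = 5.0  # initial guess
for n in range(1,50):
    x_new = x - fp(x)/fpp(x)              # Newton step
    if abs(f(x_new)-f(x)) < 1e-8:         # convergence: small change in function value
        break
    x = x_new
print(n, x_new, fp(x_new))  # converges in a handful of iterations; derivative is (almost) zero
```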
###Code
def minimize_newton(f,x0,jac,hess,max_iter=500,tol=1e-8):
""" minimize function with Newtons' algorithm
Args:
f (callable): function
x0 (ndarray): initial guess
jac (callable): jacobian
hess (callable): hessian
max_iter (int): maximum number of iterations
tol (float): tolerance
Returns:
x (ndarray): minimizer
n (int): number of iterations used
"""
# step 1: initialize
x = x0
fx = f(x0)
n = 1
# step 2-5: iteration
while n < max_iter:
x_prev = x
fx_prev = fx
# step 2: evaluate gradient and hessian
jacx = jac(x_prev)
hessx = hess(x_prev)
# step 3: update x
inv_hessx = linalg.inv(hessx)
x = x_prev - inv_hessx@jacx
# step 4: check convergence
fx = f(x)
if abs(fx-fx_prev) < tol:
break
# step 5: increment n
n += 1
return x,n
###Output
_____no_output_____
###Markdown
**Algorithm:** `minimize_gradient_descent()` 1. Choose tolerance $\epsilon>0$, potential step sizes, $ \boldsymbol{\alpha} = [\alpha_0,\alpha_1,\dots,\alpha_K]$, guess on $\boldsymbol{x}_0$, compute $f(\boldsymbol{x}_0)$, and set $n=1$. 2. Compute $\nabla f(\boldsymbol{x}_{n-1})$. 3. Find good step size: $$ \alpha^{\ast} = \arg \min_{\alpha \in \boldsymbol{\alpha}} f(\boldsymbol{x}_{n-1} - \alpha \nabla f(\boldsymbol{x}_{n-1})) $$ 4. Compute new guess: $$ \boldsymbol{x}_{n} = \boldsymbol{x}_{n-1} - \alpha^{\ast} \nabla f(\boldsymbol{x}_{n-1}) $$ 5. If $|f(\boldsymbol{x}_n)-f(\boldsymbol{x}_{n-1})| < \epsilon$ then stop. 6. Set $n = n + 1$ and return to step 2.
###Code
def minimize_gradient_descent(f,x0,jac,alphas=[0.01,0.05,0.1,0.25,0.5,1],max_iter=500,tol=1e-8):
""" minimize function with gradient descent
Args:
f (callable): function
x0 (ndarray): initial guess
jac (callable): jacobian
alphas (list): potential step sizes
max_iter (int): maximum number of iterations
tol (float): tolerance
Returns:
x (ndarray): minimizer
n (int): number of iterations used
"""
# step 1: initialize
x = x0
fx = f(x0)
n = 1
# step 2-6: iteration
while n < max_iter:
x_prev = x
fx_prev = fx
# step 2: evaluate gradient
jacx = jac(x)
# step 3: find good step size (line search)
fx_ast = np.inf
alpha_ast = np.nan
for alpha in alphas:
x = x_prev - alpha*jacx
fx = f(x)
if fx < fx_ast:
fx_ast = fx
alpha_ast = alpha
# step 4: update guess
x = x_prev - alpha_ast*jacx
# step 5: check convergence
fx = f(x)
if abs(fx-fx_prev) < tol:
break
# step 6: increment n
n += 1
return x,n
###Output
_____no_output_____
###Markdown
**Many generalizations:**1. Use both Hessian and line search2. Stop line search when improvement found3. Limit attention to a "trust-region" etc. etc. etc. etc. Example: The rosenbrock function Consider the **rosenbrock function**:$$ f(\boldsymbol{x}) = f(x_1,x_2) =0.5(1-x_{1})^{2}+(x_{2}-x_{1}^{2})^{2}$$with **jacobian** (gradient)$$ \nabla f(\boldsymbol{x})=\begin{bmatrix}\frac{\partial f}{\partial x_{1}}\\\frac{\partial f}{\partial x_{2}}\end{bmatrix}=\begin{bmatrix}-(1-x_{1})-4x_{1}(x_{2}-x_{1}^{2})\\2(x_{2}-x_{1}^{2})\end{bmatrix}$$and **hessian**:$$\boldsymbol{H}f(\boldsymbol{x})=\begin{bmatrix}\frac{\partial f}{\partial x_{1}x_{1}} & \frac{\partial f}{\partial x_{1}x_{2}}\\\frac{\partial f}{\partial x_{1}x_{2}} & \frac{\partial f}{\partial x_{2}x_{2}}\end{bmatrix}=\begin{bmatrix}1-4x_{2}+12x_{1}^{2} & -4x_{1}\\-4x_{1} & 2\end{bmatrix}$$**Note:** Minimum is at $(1,1)$ where $f(1,1)=0$. **Check jacobian and hessian:**
###Code
sm.init_printing(use_unicode=True)
x1 = sm.symbols('x_1')
x2 = sm.symbols('x_2')
f = 0.5*(1.0-x1)**2 + (x2-x1**2)**2
Df = sm.Matrix([sm.diff(f,i) for i in [x1,x2]])
Df
Hf = sm.Matrix([[sm.diff(f,i,j) for j in [x1,x2]] for i in [x1,x2]])
Hf
###Output
_____no_output_____
###Markdown
**Implementation:**
###Code
def _rosen(x1,x2):
return 0.5*(1.0-x1)**2+(x2-x1**2)**2
def rosen(x):
return _rosen(x[0],x[1])
def rosen_jac(x):
return np.array([-(1.0-x[0])-4*x[0]*(x[1]-x[0]**2),2*(x[1]-x[0]**2)])
def rosen_hess(x):
return np.array([[1-4*x[1]+12*x[0]**2,-4*x[0]],[-4*x[0],2]])
###Output
_____no_output_____
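###Markdown
As a quick numerical sanity check, the analytical jacobian can be compared to a central finite-difference approximation at an arbitrary test point (a minimal sketch):
```python
x = np.array([-1.5,2.0])  # arbitrary test point
eps = 1e-6
num_jac = np.array([(rosen(x+eps*e)-rosen(x-eps*e))/(2*eps) for e in np.eye(2)])
print(num_jac)       # finite-difference approximation
print(rosen_jac(x))  # analytical jacobian -- should be (almost) identical
```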
###Markdown
**3D Plot:**
###Code
# a. grids
x1_vec = np.linspace(-2,2,500)
x2_vec = np.linspace(-2,2,500)
x1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')
rosen_grid = _rosen(x1_grid,x2_grid)
# b. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(x1_grid,x2_grid,rosen_grid,cmap=cm.jet)
# c. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$u$')
# d. invert xaxis
ax.invert_xaxis()
# e. add colorbar
fig.colorbar(cs);
###Output
_____no_output_____
###Markdown
**Contour plot:**
###Code
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
levels = [1e-6,5*1e-6,1e-5,5*1e-5,1e-4,5*1e-4,1e-3,5*1e-3,1e-2,5*1e-2,1,2,4,6,8,12,16,20]
cs = ax.contour(x1_grid,x2_grid,rosen_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs);
###Output
_____no_output_____
###Markdown
**Newton:**
###Code
x0 = np.array([5,4])
x,n = minimize_newton(rosen,x0,rosen_jac,rosen_hess)
print(n,x,rosen(x))
###Output
6 [1. 1.] 0.0
###Markdown
**Gradient descent:**
###Code
x0 = np.array([5,4])
x,n = minimize_gradient_descent(rosen,x0,rosen_jac,alphas=[0.01,0.05,0.1,0.25,0.5,1])
print(n,x,rosen(x))
###Output
173 [1.00020519 1.00053964] 3.7750814497569406e-08
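###Markdown
The line search above simply evaluates a fixed grid of step sizes. SciPy also provides a Wolfe-condition line search, `scipy.optimize.line_search`, which could be used instead; a minimal sketch (not a drop-in replacement for the loop above):
```python
x = np.array([5.0,4.0])
pk = -rosen_jac(x)  # steepest-descent direction
ls = optimize.line_search(rosen,rosen_jac,x,pk)
alpha = ls[0]  # step size satisfying the Wolfe conditions (None if the search fails)
print('step size:',alpha)
if alpha is not None:
    print('f before:',rosen(x),'f after:',rosen(x+alpha*pk))
```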
###Markdown
Scipy minimizers **Preparation I:** Function for collecting information while the optimizer is running:
###Code
# complicated -> not necessary to understand it
def collect(x):
# globals used to keep track across iterations
global evals # set evals = 0 before calling optimizer
global x0
global x1s
global x2s
global fs
# a. initialize list
if evals == 0:
x1s = [x0[0]]
x2s = [x0[1]]
fs = [rosen(x0)]
# b. append trial values
x1s.append(x[0])
x2s.append(x[1])
fs.append(rosen(x))
# c. increment number of evaluations
evals += 1
###Output
_____no_output_____
###Markdown
**Preparation II:** Function for plotting the collected information:
###Code
# complicated -> not necessary to understand it
def contour():
global evals
global x1s
global x2s
global fs
# a. contour plot
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1)
levels = [1e-6,5*1e-6,1e-5,5*1e-5,1e-4,5*1e-4,1e-3,5*1e-3,1e-2,5*1e-2,1,2,4,6,8,12,16,20]
cs = ax.contour(x1_grid,x2_grid,rosen_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs)
ax.plot(x1s,x2s,'-o',ms=4,color='black')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
# b. function value
ax = fig.add_subplot(1,2,2)
ax.plot(np.arange(evals+1),fs,'-o',ms=4,color='black')
ax.set_xlabel('iteration')
ax.set_ylabel('function value')
###Output
_____no_output_____
###Markdown
**Nelder-Mead**
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,
method='Nelder-Mead',
callback=collect, # call the collect() before each iteration
options={'disp':True}) # display the results
contour()
###Output
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 57
Function evaluations: 105
###Markdown
> **Note:** Does not require a gradient. Slow convergence close to target. **Newton** (with analytical hessian)
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,jac=rosen_jac,hess=rosen_hess,
method='Newton-CG',
callback=collect,
options={'disp':True})
contour()
###Output
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 11
Function evaluations: 12
Gradient evaluations: 22
Hessian evaluations: 11
###Markdown
> **Note:** Smoother and faster. **Newton** (with numerical hessian computed by scipy)
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,jac=rosen_jac,
method='Newton-CG',
callback=collect,
options={'disp':True})
contour()
###Output
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 11
Function evaluations: 12
Gradient evaluations: 58
Hessian evaluations: 0
###Markdown
> **Note:** Same as above, but gradient evaluations instead of hessian evaluations. **BFGS** (with analytical gradient)
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,jac=rosen_jac,
method='BFGS',
callback=collect,
options={'disp':True})
contour()
###Output
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 13
Function evaluations: 14
Gradient evaluations: 14
###Markdown
> **Note:** Non-smooth, but fast. Very low number of function evaluations. **BFGS** (with numerical gradient computed by scipy)
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,
method='BFGS',
callback=collect,
options={'disp':True})
contour()
###Output
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 13
Function evaluations: 56
Gradient evaluations: 14
###Markdown
> **Note:** Same as above, but more function evaluations. **L-BFGS-B** (with analytical gradient)
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,jac=rosen_jac,
method='L-BFGS-B',
bounds=((-3,3),(-3,3)),
callback=collect,
options={'disp':True})
contour()
###Output
_____no_output_____
###Markdown
**SLSQP**
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,jac=rosen_jac,
method='SLSQP',
bounds=((-2,2),(-2,2)),
callback=collect,
options={'disp':True})
contour()
###Output
Optimization terminated successfully. (Exit mode 0)
Current function value: 4.7296908855910356e-09
Iterations: 10
Function evaluations: 13
Gradient evaluations: 10
###Markdown
Controlling the optimizers > **Note:** See the settings for each optimizer in the [documentation](https://docs.scipy.org/doc/scipy/reference/optimize.html). We can lower the **tolerance**:
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,
method='BFGS',
callback=collect,
options={'disp':True,'gtol':1e-2}) # note this
# contour()
###Output
Optimization terminated successfully.
Current function value: 0.000006
Iterations: 11
Function evaluations: 48
Gradient evaluations: 12
###Markdown
We can change the **maximum number of iterations**:
###Code
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(rosen,x0,
method='BFGS',
callback=collect,
options={'disp':True,'maxiter':5}) # note this and warning
contour()
###Output
Warning: Maximum number of iterations has been exceeded.
Current function value: 0.486266
Iterations: 5
Function evaluations: 24
Gradient evaluations: 6
###Markdown
Sombrero function: Local minima and multistart Consider the **sombrero** function$$f(x_1,x_2) = g\Big(\sqrt{x_1^2 + x_2^2}\Big)$$where$$g(r) = -\frac{\sin(r)}{r+10^{-4}} + 10^{-4}r^2$$The **global minimum** of this function is (0,0). But the function also has (infinitely many) local minima. How to avoid these?
###Code
def _sombrero(x1,x2):
r = np.sqrt(x1**2 + x2**2)
return -np.sin(r)/(r+1e-4) + 1e-4*r**2
sombrero = lambda x: _sombrero(x[0],x[1])
###Output
_____no_output_____
###Markdown
3D plot
###Code
# a. grids
x1_vec = np.linspace(-15,15,500)
x2_vec = np.linspace(-15,15,500)
x1_grid_sombrero,x2_grid_sombrero = np.meshgrid(x1_vec,x2_vec,indexing='ij')
sombrero_grid = _sombrero(x1_grid_sombrero,x2_grid_sombrero)
# b. main
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(x1_grid_sombrero,x2_grid_sombrero,sombrero_grid,cmap=cm.jet)
# c. add labels
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$u$')
# d. invert xaxis
ax.invert_xaxis()
# e. colorbar
fig.colorbar(cs);
###Output
_____no_output_____
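###Markdown
Because the function only depends on the radius $r=\sqrt{x_1^2+x_2^2}$, a quick one-dimensional plot of $g(r)$ makes the ring-shaped local minima easy to see (a minimal sketch):
```python
r = np.linspace(0,15,500)
g = -np.sin(r)/(r+1e-4) + 1e-4*r**2  # same g as in the definition above
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(r,g)
ax.set_xlabel('$r$')
ax.set_ylabel('$g(r)$');
```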
###Markdown
Multistart - BFGS **Multi-start:** Draw many random starting values:
###Code
np.random.seed(1986)
x0s = -15 + 30*np.random.uniform(size=(5000,2)) # in [-15,15]
xs = np.empty((5000,2))
fs = np.empty(5000)
###Output
_____no_output_____
###Markdown
Try to solve with **BFGS** starting from each of these:
###Code
fopt = np.inf
xopt = np.nan
for i,x0 in enumerate(x0s):
# a. optimize
result = optimize.minimize(sombrero,x0,method='BFGS')
xs[i,:] = result.x
f = result.fun
# b. print first 10 or if better than seen yet
if i < 10 or f < fopt: # plot 10 first or if improving
if f < fopt:
fopt = f
xopt = xs[i,:]
print(f'{i:4d}: x0 = ({x0[0]:6.2f},{x0[1]:6.2f})',end='')
print(f' -> converged at ({xs[i][0]:6.2f},{xs[i][1]:6.2f}) with f = {f:.12f}')
# best solution
print(f'\nbest solution:\n x = ({xopt[0]:6.2f},{xopt[1]:6.2f}) -> f = {fopt:.12f}')
###Output
0: x0 = ( 2.07, 2.07) -> converged at ( 2.26, 2.26) with f = -0.051182722123
1: x0 = ( 3.25, 3.25) -> converged at ( 3.69, 3.69) with f = -0.051182722111
2: x0 = ( 1.35, 1.35) -> converged at ( 1.67, 1.67) with f = -0.122414211497
3: x0 = ( -3.42, -3.42) -> converged at ( -4.63, -4.63) with f = -0.122414211494
4: x0 = ( 5.70, 5.70) -> converged at ( 5.06, 5.06) with f = -0.122414211497
5: x0 = ( 5.71, 5.71) -> converged at ( 4.06, 4.06) with f = -0.122414211468
6: x0 = ( -6.49, -6.49) -> converged at ( -4.96, -4.96) with f = -0.122414211497
7: x0 = ( -5.77, -5.77) -> converged at ( -7.34, -7.34) with f = -0.122414211356
8: x0 = ( 1.58, 1.58) -> converged at ( 0.04, 0.04) with f = -0.997762360170
9: x0 = ( 2.33, 2.33) -> converged at ( 2.72, 2.72) with f = -0.051182722123
25: x0 = ( 1.41, 1.41) -> converged at ( 0.03, 0.03) with f = -0.997762360171
150: x0 = (-14.53,-14.53) -> converged at ( -0.06, -0.06) with f = -0.997762360171
223: x0 = ( 2.87, 2.87) -> converged at ( -0.07, -0.07) with f = -0.997762360171
279: x0 = ( -6.90, -6.90) -> converged at ( -0.04, -0.04) with f = -0.997762360171
354: x0 = ( -1.24, -1.24) -> converged at ( -0.03, -0.03) with f = -0.997762360171
1588: x0 = (-12.93,-12.93) -> converged at ( -0.05, -0.05) with f = -0.997762360171
4700: x0 = ( 1.06, 1.06) -> converged at ( 0.06, 0.06) with f = -0.997762360171
best solution:
x = ( 0.06, 0.04) -> f = -0.997762360171
###Markdown
The solver, wrongly, **converges to many of the local minima**:
###Code
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(xs[:,0],xs[:,1]);
###Output
_____no_output_____
###Markdown
Multistart - Nelder-Mead Try to solve with **Nelder-Mead** starting from each of these:
###Code
fopt = np.inf
xopt = np.nan
for i,x0 in enumerate(x0s):
# a. optimize
result = optimize.minimize(sombrero,x0,method='Nelder-Mead')
xs[i,:] = result.x
f = result.fun
# b. print first 10 or if better than seen yet
if i < 10 or f < fopt: # plot 10 first or if improving
if f < fopt:
fopt = f
xopt = xs[i,:]
print(f'{i:4d}: x0 = ({x0[0]:6.2f},{x0[1]:6.2f})',end='')
print(f' -> converged at ({xs[i][0]:6.2f},{xs[i][1]:6.2f}) with f = {f:.12f}')
# best solution
print(f'\nbest solution:\n x = ({xopt[0]:6.2f},{xopt[1]:6.2f}) -> f = {fopt:.12f}')
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(xs[:,0],xs[:,1]);
###Output
_____no_output_____
###Markdown
Constraints Consider the **constrained problem**: $$\min_{x_1,x_2,x_3,x_4} x_1x_4(x_1+x_2+x_3) + x_3$$subject to$$\begin{aligned}x_1x_2x_3x_4 &\geq 25 \\x_1^2+x_2^2+x_3^2+x_4^2 &= 40 \\1 \leq x_1,x_2,x_3,x_4 &\leq 5\end{aligned}$$ Define **objective** and **constraints**:
###Code
def _objective(x1,x2,x3,x4):
return x1*x4*(x1+x2+x3)+x3
def objective(x):
return _objective(x[0],x[1],x[2],x[3])
def ineq_constraint(x):
return x[0]*x[1]*x[2]*x[3]-25.0
def eq_constraint(x):
sum_eq = 40.0
for i in range(4):
sum_eq = sum_eq - x[i]**2
return sum_eq
# a. setup
bound = (1.0,5.0)
bounds = (bound, bound, bound, bound)
ineq_con = {'type': 'ineq', 'fun': ineq_constraint}
eq_con = {'type': 'eq', 'fun': eq_constraint}
# b. call optimizer
x0 = (40**(1/8),40**(1/8),40**(1/8),40**(1/8)) # initial guess inside the bounds
result = optimize.minimize(objective,x0,
method='SLSQP',
bounds=bounds,
constraints=[ineq_con,eq_con],
options={'disp':True})
print('\nx = ',result.x)
###Output
Optimization terminated successfully. (Exit mode 0)
Current function value: 17.01401728904412
Iterations: 9
Function evaluations: 55
Gradient evaluations: 9
x = [1. 4.74299967 3.82114994 1.3794083 ]
###Markdown
**Alternative:** Extend the **objective function with a penalty term**, where guesses outside the allowed bounds and constraints are projected into the allowed region, but a (large) penalty term is added to discourage this. Solve this problem with an unconstrained solver. Not always easy to do in practice if there are multiple constraints (see lecture 3 for a simple example). Interpolation **Intermezzo:** To consider dynamic optimization problems, we need to think about interpolation. **Inputs:** 1. Sorted vector of known points (grid vector), $G$ 2. Vector of known values (at these points), $F$ 3. A new point, `x` **Algorithm:** `linear_interpolate()` 1. Determine `i` such that$$G_i \leq x < G_{i+1}$$ 2. Compute interpolated value by$$y = F_{i} + \frac{F_{i+1}-F_{i}}{G_{i+1}-G_{i}}(x-G_{i})$$ **Extrapolation:** 1. Below, where $x < G_0$: $$y = F_{0} + \frac{F_{1}-F_{0}}{G_{1}-G_{0}}(x-G_{0})$$ 2. Above, where $x > G_n$: $$y = F_{n-1} + \frac{F_{n}-F_{n-1}}{G_{n}-G_{n-1}}(x-G_{n-1})$$
###Code
def linear_interpolate(G,F,x):
""" linear interpolation (and extrapolation)
Args:
G (ndarray): known points
F (ndarray): known values
x (float): point to be interpolated
Returns:
y (float): interpolated value
"""
assert len(G) == len(F)
n = len(G)
# a. find index in known points
if x < G[1]: # extrapolation below
i = 0
elif x > G[-2]: # extrapolation above
i = n-2
else: # true interpolation
# search
i = 0
while x >= G[i+1] and i < n-1:
i += 1
assert x >= G[i]
assert x < G[i+1]
# b. interpolate
diff_G = G[i+1]-G[i]
diff_F = F[i+1]-F[i]
slope = diff_F/diff_G
y = F[i] + slope*(x-G[i])
return y
###Output
_____no_output_____
###Markdown
Example Consider the following function and known points:
###Code
f = lambda x: (x-3)**3 - 3*x**2 + 5*x
G = np.linspace(-5,10,6)
F = f(G)
###Output
_____no_output_____
###Markdown
**Simple test:**
###Code
for x in [-2.3,4.1,7.5,9.1]:
true = f(x)
y = linear_interpolate(G,F,x)
print(f'x = {x:4.1f} -> true = {true:6.1f}, interpolated = {y:6.1f}')
###Output
x = -2.3 -> true = -176.2, interpolated = -193.5
x = 4.1 -> true = -28.6, interpolated = -27.7
x = 7.5 -> true = -40.1, interpolated = -24.5
x = 9.1 -> true = 24.1, interpolated = 50.7
###Markdown
**Scipy.interpolate:** Use the *RegularGridInterpolator*
###Code
# a. construct interpolation function
interp_func = interpolate.RegularGridInterpolator([G],F,
bounds_error=False,
fill_value=None)
# bounds_error=False and fill_value=None allow for extrapolation
# b. interpolate
grid = np.linspace(-7,12,500)
interp_values = interp_func(grid)
# c. evaluate true values
true_values = f(grid)
# d. plot true and interpolated values
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(G,F,'o',label='known points')
ax.plot(grid,true_values,'-',lw=1,label='true function')
ax.plot(grid,interp_values,'-',lw=1,label='interpolated values')
ax.legend(loc='lower right',facecolor='white',frameon=True);
###Output
_____no_output_____
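###Markdown
The same interpolator also works in higher dimensions by passing one grid vector per dimension; a minimal two-dimensional sketch with an arbitrary test function:
```python
f2 = lambda x1,x2: x1**2 + x2  # arbitrary smooth test function
G1 = np.linspace(0,1,10)
G2 = np.linspace(0,1,10)
F2 = f2(*np.meshgrid(G1,G2,indexing='ij'))  # known values on the grid, shape (10,10)
interp_func_2d = interpolate.RegularGridInterpolator([G1,G2],F2,bounds_error=False,fill_value=None)
print(interp_func_2d([[0.25,0.75]]))  # interpolated value at (0.25,0.75)
print(f2(0.25,0.75))                  # true value, for comparison
```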
###Markdown
**Note:** 1. Linear interpolation works best when the function does not curve too much. 2. Extrapolation is much worse than interpolation. **Multiple dimensions:** Same principle, ``interpolate.RegularGridInterpolator([G1,G2,G3],F)``. Dynamic optimization problems The following subject is hard. But also extremely useful. *If you master this, you can solve (almost) all economic models you meet on your way in life*. Problem formulation Consider a **household** living in two periods. In the **second period** it gets utility from **consuming** and **leaving a bequest** (warm glow),$$\begin{aligned}v_{2}(m_{2})&= \max_{c_{2}}\frac{c_{2}^{1-\rho}}{1-\rho}+\nu\frac{(m_{2}-c_{2}+\kappa)^{1-\rho}}{1-\rho}\\\text{s.t.} \\c_{2} &\in [0,m_{2}]\end{aligned}$$ where * $m_2$ is cash-on-hand * $c_2$ is consumption* $\rho > 1$ is the risk aversion coefficient* $\nu > 0 $ is the strength of the bequest motive* $\kappa > 0$ is the degree of luxuriousness in the bequest motive * the constraint ensures the household *cannot* die in debt The **value function** $v_2(m_2)$ measures the household's value of having $m_2$ at the beginning of period 2.
###Code
def utility(c,rho):
return c**(1-rho)/(1-rho)
def bequest(m,c,nu,kappa,rho):
return nu*(m-c+kappa)**(1-rho)/(1-rho)
def v2(c2,m2,rho,nu,kappa):
return utility(c2,rho) + bequest(m2,c2,nu,kappa,rho)
###Output
_____no_output_____
###Markdown
In the **first period**, the household gets utility from consuming and takes into account that it will also live in the next-period, where it receives a stochastic income,$$\begin{aligned}v_{1}(m_{1})&=\max_{c_{1}}\frac{c_{1}^{1-\rho}}{1-\rho}+\beta\mathbb{E}_{1}\left[v_2(m_2)\right]\\&\text{s.t.}&\\m_2&= (1+r)(m_{1}-c_{1})+y_{2} \\y_{2}&= \begin{cases}1-\Delta & \text{with prob. }0.5\\1+\Delta & \text{with prob. }0.5 \end{cases}\\c_{1}&\in [0,m_{1}]\\\end{aligned}$$ where* $m_1$ is cash-on-hand in period 1* $c_1$ is consumption in period 1* $\beta > 0$ is the discount factor* $\mathbb{E}_1$ is the expectation operator conditional on information in period 1* $y_2$ is income in period 2* $\Delta \in (0,1)$ is the level of income risk (mean-preserving)* $r$ is the return on savings* the last constraint ensures the household *cannot* borrow
###Code
def v1(c1,m1,rho,beta,r,Delta,v2_interp):
# a. v2 value, if low income
m2_low = (1+r)*(m1-c1) + 1-Delta
v2_low = v2_interp([m2_low])[0]
# b. v2 value, if high income
m2_high = (1+r)*(m1-c1) + 1+Delta
v2_high = v2_interp([m2_high])[0]
# c. expected v2 value
v2 = 0.5*v2_low + 0.5*v2_high
# d. total value
return utility(c1,rho) + beta*v2
###Output
_____no_output_____
###Markdown
Solve household problem Choose **parameters**:
###Code
rho = 8
kappa = 0.5
nu = 0.1
r = 0.04
beta = 0.94
Delta = 0.5
###Output
_____no_output_____
###Markdown
**Solve second period:**
###Code
def solve_period_2(rho,nu,kappa,Delta):
# a. grids
m2_vec = np.linspace(1e-8,5,500)
v2_vec = np.empty(500)
c2_vec = np.empty(500)
# b. solve for each m2 in grid
for i,m2 in enumerate(m2_vec):
# i. objective
obj = lambda c2: -v2(c2,m2,rho,nu,kappa)
# ii. initial value (consume half)
x0 = m2/2
# iii. optimizer
result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m2])
# iv. save
v2_vec[i] = -result.fun
c2_vec[i] = result.x
return m2_vec,v2_vec,c2_vec
# solve
m2_vec,v2_vec,c2_vec = solve_period_2(rho,nu,kappa,Delta)
# illustration
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1)
ax.plot(m2_vec,c2_vec)
ax.set_xlabel('$m_2$')
ax.set_ylabel('$c_2$')
ax.set_title('consumption function in period 2')
ax = fig.add_subplot(1,2,2)
ax.plot(m2_vec,v2_vec)
ax.set_xlabel('$m_2$')
ax.set_ylabel('$v_2$')
ax.set_title('value function in period 2')
ax.set_ylim([-40,1]);
###Output
_____no_output_____
###Markdown
**Question:** Why is there a kink in the consumption function? **Construct interpolator:**
###Code
v2_interp = interpolate.RegularGridInterpolator([m2_vec], v2_vec,
bounds_error=False,fill_value=None)
###Output
_____no_output_____
###Markdown
**Solve first period:**
###Code
def solve_period_1(rho,beta,r,Delta,v2_interp):
# a. grids
m1_vec = np.linspace(1e-8,4,100)
v1_vec = np.empty(100)
c1_vec = np.empty(100)
# b. solve for each m1 in grid
for i,m1 in enumerate(m1_vec):
# i. objective
obj = lambda c1: -v1(c1,m1,rho,beta,r,Delta,v2_interp)
# ii. initial guess (consume half)
x0 = m1*1/2
# iii. optimize
result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m1])
# iv. save
v1_vec[i] = -result.fun
c1_vec[i] = result.x
return m1_vec,v1_vec,c1_vec
# solve
m1_vec,v1_vec,c1_vec = solve_period_1(rho,beta,r,Delta,v2_interp)
# illustrate
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1)
ax.plot(m1_vec,c1_vec)
ax.set_xlabel('$m_1$')
ax.set_ylabel('$c_1$')
ax.set_title('consumption function in period 1')
ax = fig.add_subplot(1,2,2)
ax.plot(m1_vec,v1_vec)
ax.set_xlabel('$m_1$')
ax.set_ylabel('$v_1$')
ax.set_title('value function in period 1')
ax.set_ylim([-40,1]);
###Output
_____no_output_____
###Markdown
**Summary:** We can summarize what we have done in a single function doing: 1. Solve period 2 (i.e. find $v_2(m_2)$ and $c_2(m_2)$) 2. Construct interpolator of $v_2(m_2)$ 3. Solve period 1 (i.e. find $v_1(m_1)$ and $c_1(m_1)$)
###Code
def solve(rho,beta,r,Delta,nu,kappa):
# a. solve period 2
m2_vec,v2_vec,c2_vec = solve_period_2(rho,nu,kappa,Delta)
# b. construct interpolator
v2_interp = interpolate.RegularGridInterpolator([m2_vec], v2_vec,
bounds_error=False,fill_value=None)
# b. solve period 1
m1_vec,v1_vec,c1_vec = solve_period_1(rho,beta,r,Delta,v2_interp)
return m1_vec,c1_vec
###Output
_____no_output_____
###Markdown
**Plot the consumption function for various levels of income risk**, i.e. various $\Delta$
###Code
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
for Delta in [0.05,0.15,0.25]:
m1_vec,c1_vec = solve(rho,beta,r,Delta,nu,kappa)
ax.plot(m1_vec,c1_vec,label=f'$\Delta = {Delta}$')
ax.legend(loc='lower right',facecolor='white',frameon=True)
ax.set_xlabel('$m_1$')
ax.set_ylabel('$c_1$')
ax.set_title('consumption function in period 1')
ax.set_xlim([0,2])
ax.set_ylim([0,1.5]);
###Output
_____no_output_____ |
ML Notebook/Strain_Recommender.ipynb | ###Markdown
Load the Data
###Code
""" Import Statements """
# Classics
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA
import spacy
from spacy.tokenizer import Tokenizer
from nltk.stem import PorterStemmer
nlp = spacy.load("en_core_web_lg")
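# (added note) the 'en_core_web_lg' model must be installed beforehand,
# e.g. with: python -m spacy download en_core_web_lg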
df = pd.read_csv("https://raw.githubusercontent.com/build-week-med-cabinet-2/ML_Model-Data/master/Cannabis_Strains_Features.csv")
df.head()
###Output
_____no_output_____
###Markdown
Tokenize Columns
###Code
# Merge all the text columns to make a all words columns
df['bag_of_words'] = df['Strain']+" "+df["Effects"] +" "+ df["Flavor"] +" "+ df['Description'] +" "+ df['Type']
df['bag_of_words'].head()
tokenizer = Tokenizer(nlp.vocab)
tokens = []
""" Make them tokens """
for doc in tokenizer.pipe(df['bag_of_words'], batch_size=500):
doc_tokens = [token.text for token in doc]
tokens.append(doc_tokens)
df['tokens'] = tokens
df['tokens'].head()
from sklearn.feature_extraction.text import TfidfVectorizer
# Instantiate vectorizer object
tfidf = TfidfVectorizer(stop_words='english')
# Create a vocabulary and get word counts per document
# Similar to calling fit() and then transform()
dtm = tfidf.fit_transform(df['bag_of_words'])
# Get feature names to use as dataframe column headers
# (in newer scikit-learn this method is get_feature_names_out)
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())
# View Feature Matrix as DataFrame
dtm.head()
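# (added note) dtm has one row per strain and one column per vocabulary term;
# dtm.shape shows its dimensions, and tfidf.transform() maps new free-text
# queries into the same space (used by the search function below)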
###Output
_____no_output_____
###Markdown
KNN Model
###Code
# Instantiate
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_extraction.text import TfidfVectorizer
# Fit on TF-IDF Vectors
nn = NearestNeighbors(n_neighbors=5, algorithm='kd_tree')
nn.fit(dtm)
# Query Using kneighbors
nn.kneighbors([dtm.iloc[378]])
df['bag_of_words'][378][:200]
df['bag_of_words'][929][:200]
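# (added note) algorithm='kd_tree' uses Euclidean distance; for TF-IDF vectors a
# common alternative is cosine distance, e.g.
# NearestNeighbors(n_neighbors=5, metric='cosine', algorithm='brute'),
# since scikit-learn only supports the cosine metric with the brute-force algorithm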
###Output
_____no_output_____
###Markdown
Pickle the Model
###Code
# Note: sklearn.externals.joblib is deprecated (removed in newer scikit-learn);
# with recent versions use `import joblib` directly instead.
from sklearn.externals import joblib
joblib.dump(nn, 'NN_MJrec.pkl')
joblib.dump(tfidf, "tfidf.pkl")
###Output
_____no_output_____
###Markdown
Search Function
###Code
nn = joblib.load('NN_MJrec.pkl')
tfidf = joblib.load('tfidf.pkl')
def recommend(text):
    # Transform the query text into the TF-IDF space
    text = pd.Series(text)
    vect = tfidf.transform(text)
    # Densify into a DataFrame so it can be passed to the KNN model
    vectdf = pd.DataFrame(vect.todense())
    # Get the indexes of the 5 nearest strains
    top5 = nn.kneighbors(vectdf, n_neighbors=5)[1][0].tolist()
    # Send recommendations to a DataFrame (copy to avoid SettingWithCopyWarning)
    recommendations_df = df.iloc[top5].copy()
    recommendations_df['index'] = recommendations_df.index
    return recommendations_df
recommend("I want to a feel like a lemon just cleaned my mouth and wants to have an adventure ")
###Output
C:\Users\navo1\Anaconda3\envs\U4-S1-NLP\lib\site-packages\ipykernel_launcher.py:16: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
app.launch_new_instance()
|
tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/workshop_license_keys.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['JSL_SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
jsl_version = secret.split('-')[0]
jsl_version
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
Collecting spark-nlp
Using cached https://files.pythonhosted.org/packages/b5/a2/5c2e18a65784442ded6f6c58af175ca4d99649337de569fac55b04d7ed8e/spark_nlp-2.5.5-py2.py3-none-any.whl
Installing collected packages: spark-nlp
Successfully installed spark-nlp-2.5.5
###Markdown
Import dependencies into Python and start the Spark session
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
builder = SparkSession.builder \
.appName('Spark NLP Licensed') \
.master('local[*]') \
.config('spark.driver.memory', '16G') \
.config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer') \
.config('spark.kryoserializer.buffer.max', '2000M') \
.config('spark.jars.packages', 'com.johnsnowlabs.nlp:spark-nlp_2.11:' +sparknlp.version()) \
.config('spark.jars', f'https://pypi.johnsnowlabs.com/{secret}/spark-nlp-jsl-{jsl_version}.jar')
spark = builder.getOrCreate()
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
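# (added note) the LightPipeline can annotate small texts directly in Python, e.g.
# light_pipeline.annotate("She was diagnosed with diabetes.") returns a dict mapping
# output column names to lists of results, while fullAnnotate() (used below) keeps full metadata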
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical_large download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+--------------------------------------------------------+---------+
|chunk |ner_label|
+--------------------------------------------------------+---------+
|subarachnoid hemorrhage |PROBLEM |
|ruptured left posteroinferior cerebellar artery aneurysm|PROBLEM |
|clipped |TREATMENT|
|a right frontal ventricular peritoneal shunt |TREATMENT|
|left chest pain |PROBLEM |
|persistent pain to the left shoulder and left elbow |PROBLEM |
|MRI of the left shoulder |TEST |
|rotator cuff tear |PROBLEM |
|a previous MRI of the cervical spine |TEST |
|an osteophyte on the left C6-C7 level |PROBLEM |
|MRI of the shoulder |TEST |
|anterior cervical discectomy |TREATMENT|
|anterior interbody fusion |TREATMENT|
|bleeding |PROBLEM |
|infection |PROBLEM |
|bleeding |PROBLEM |
|soft tissue bleeding |PROBLEM |
|evacuation |TREATMENT|
|hematoma |PROBLEM |
|bleeding into the epidural space |PROBLEM |
+--------------------------------------------------------+---------+
only showing top 20 rows
###Markdown
Functions to display outputs as HTML
###Code
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
    # Entity chunks and the original document text from fullAnnotate()
    ner_chunks = full_annotation[0]['ner_chunk']
    text = full_annotation[0]['document'][0].result
    # Assign a random background colour to each entity label
    label_color = {}
    for chunk in ner_chunks:
        label_color[chunk.metadata['entity']] = get_color()
    html_output = "<div>"
    pos = 0
    # Walk through the text: plain spans between entities, highlighted spans for each chunk
    for n in ner_chunks:
        if pos < n.begin and pos < len(text):
            html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
        pos = n.end + 1
        html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
    # Trailing text after the last entity
    if pos < len(text):
        html_output += f"<span class=\"others\">{text[pos:]}</span>"
    html_output += "</div>"
    display(HTML(html_output))
###Output
_____no_output_____
###Markdown
Display example outputs as HTML
###Code
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
_____no_output_____
###Markdown
Import dependencies into Python and start the Spark session
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
_____no_output_____
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
_____no_output_____
###Markdown
Functions to display outputs as HTML
###Code
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
###Output
_____no_output_____
###Markdown
Display example outputs as HTML
###Code
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
###Code
import os
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
# Install Spark NLP Display for visualization
!pip install --ignore-installed spark-nlp-display
###Output
_____no_output_____
###Markdown
Import dependencies into Python and start the Spark session
###Code
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(license_keys['SECRET'])
# manually start session
# params = {"spark.driver.memory" : "16G",
# "spark.kryoserializer.buffer.max" : "2000M",
# "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = MedicalNerModel.pretrained(MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("ner")
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical_large download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
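# (added note) all stages above are pretrained, so fitting on an empty DataFrame
# only materializes the PipelineModel; no training takes place here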
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results
###Code
from sparknlp_display import NerVisualizer
NerVisualizer().display(
result = result.collect()[0],
label_col = 'ner_chunk',
document_col = 'document'
)
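# (added note, assumption) depending on the installed spark-nlp-display version,
# display() may also accept return_html=True to return the rendered HTML as a string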
###Output
_____no_output_____
###Markdown
Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+--------------------------------------------------------+---------+
|chunk |ner_label|
+--------------------------------------------------------+---------+
|subarachnoid hemorrhage |PROBLEM |
|ruptured left posteroinferior cerebellar artery aneurysm|PROBLEM |
|clipped |TREATMENT|
|a right frontal ventricular peritoneal shunt |TREATMENT|
|left chest pain |PROBLEM |
|persistent pain to the left shoulder and left elbow |PROBLEM |
|MRI of the left shoulder |TEST |
|rotator cuff tear |PROBLEM |
|a previous MRI of the cervical spine |TEST |
|an osteophyte on the left C6-C7 level |PROBLEM |
|MRI of the shoulder |TEST |
|anterior cervical discectomy |TREATMENT|
|anterior interbody fusion |TREATMENT|
|bleeding |PROBLEM |
|infection |PROBLEM |
|bleeding |PROBLEM |
|soft tissue bleeding |PROBLEM |
|evacuation |TREATMENT|
|hematoma |PROBLEM |
|bleeding into the epidural space |PROBLEM |
+--------------------------------------------------------+---------+
only showing top 20 rows
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
###Code
import json
import os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
###Output
_____no_output_____
###Markdown
Import dependencies into Python and start the Spark session
###Code
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
# manually start session
# params = {"spark.driver.memory" : "16G",
# "spark.kryoserializer.buffer.max" : "2000M",
# "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
spark = sparknlp_jsl.start(license_keys['SECRET'])
spark
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_events_clinical"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = MedicalNerModel.pretrained(MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("ner")
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_events_clinical download started this may take some time.
Approximate size to download 13.8 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results
###Code
from sparknlp_display import NerVisualizer
NerVisualizer().display(
result = result.collect()[0],
label_col = 'ner_chunk',
document_col = 'document'
)
###Output
_____no_output_____
###Markdown
Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
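# (added note) arrays_zip pairs each chunk with its metadata, explode creates one
# row per detected entity, and the expressions above pull out the chunk text and
# its entity label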
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+--------------------------------------------------------+-------------+
|chunk |ner_label |
+--------------------------------------------------------+-------------+
|subarachnoid hemorrhage |PROBLEM |
|ruptured left posteroinferior cerebellar artery aneurysm|PROBLEM |
|clipped |TREATMENT |
|a right frontal ventricular peritoneal shunt |TREATMENT |
|10/12/07 |DATE |
|left chest pain |PROBLEM |
|complaint |EVIDENTIAL |
|persistent pain to the left shoulder and left elbow |PROBLEM |
|clinic |CLINICAL_DEPT|
|12/11/07 |DATE |
|which time MRI of the left shoulder |TEST |
|showed |EVIDENTIAL |
|rotator cuff tear |PROBLEM |
|a previous MRI of the cervical spine |TEST |
|show |EVIDENTIAL |
|an osteophyte on the left C6-C7 level |PROBLEM |
|MRI of the shoulder |TEST |
|anterior cervical discectomy |TREATMENT |
|anterior interbody fusion |TREATMENT |
|expected outcome |OCCURRENCE |
+--------------------------------------------------------+-------------+
only showing top 20 rows
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/workshop_license_keys.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['secret']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark and SparkNLP
! pip install --ignore-installed -q pyspark==2.4.4
! python -m pip install --upgrade spark-nlp-jsl==2.5.2 --extra-index-url https://pypi.johnsnowlabs.com/$secret
! pip install --ignore-installed -q spark-nlp==2.5.2
###Output
_____no_output_____
###Markdown
Import dependencies into Python and start the Spark session
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import sparknlp
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import pyspark.sql.functions as F
builder = SparkSession.builder \
.appName('Spark NLP Licensed') \
.master('local[*]') \
.config('spark.driver.memory', '16G') \
.config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer') \
.config('spark.kryoserializer.buffer.max', '2000M') \
.config('spark.jars.packages', 'com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.2') \
.config('spark.jars', f'https://pypi.johnsnowlabs.com/{secret}/spark-nlp-jsl-2.5.2.jar')
spark = builder.getOrCreate()
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
_____no_output_____
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+------------------------------+---------+
|chunk |ner_label|
+------------------------------+---------+
|the cyst |Disease |
|a large Prolene suture |Disease |
|a very small incisional hernia|Disease |
|the hernia cavity |Disease |
|omentum |Disease |
|the hernia |Disease |
|the wound lesion |Disease |
|The lesion |Disease |
|the existing scar |Disease |
|the cyst |Disease |
|the wound |Disease |
|this cyst down to its base |Disease |
|a small incisional hernia |Disease |
|The cyst |Disease |
|The wound |Disease |
+------------------------------+---------+
###Markdown
Functions to display outputs as HTML
###Code
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
###Output
_____no_output_____
###Markdown
Display example outputs as HTML
###Code
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
Collecting spark-nlp
Using cached https://files.pythonhosted.org/packages/b5/a2/5c2e18a65784442ded6f6c58af175ca4d99649337de569fac55b04d7ed8e/spark_nlp-2.5.5-py2.py3-none-any.whl
Installing collected packages: spark-nlp
Successfully installed spark-nlp-2.5.5
###Markdown
Import dependencies into Python and start the Spark session
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical** For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical_large download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+--------------------------------------------------------+---------+
|chunk |ner_label|
+--------------------------------------------------------+---------+
|subarachnoid hemorrhage |PROBLEM |
|ruptured left posteroinferior cerebellar artery aneurysm|PROBLEM |
|clipped |TREATMENT|
|a right frontal ventricular peritoneal shunt |TREATMENT|
|left chest pain |PROBLEM |
|persistent pain to the left shoulder and left elbow |PROBLEM |
|MRI of the left shoulder |TEST |
|rotator cuff tear |PROBLEM |
|a previous MRI of the cervical spine |TEST |
|an osteophyte on the left C6-C7 level |PROBLEM |
|MRI of the shoulder |TEST |
|anterior cervical discectomy |TREATMENT|
|anterior interbody fusion |TREATMENT|
|bleeding |PROBLEM |
|infection |PROBLEM |
|bleeding |PROBLEM |
|soft tissue bleeding |PROBLEM |
|evacuation |TREATMENT|
|hematoma |PROBLEM |
|bleeding into the epidural space |PROBLEM |
+--------------------------------------------------------+---------+
only showing top 20 rows
###Markdown
Functions to display outputs as HTML
###Code
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
###Output
_____no_output_____
###Markdown
Display example outputs as HTML
###Code
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_EVENTS_CLINICAL.ipynb) **Detect clinical events** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
Collecting spark-nlp
Using cached https://files.pythonhosted.org/packages/b5/a2/5c2e18a65784442ded6f6c58af175ca4d99649337de569fac55b04d7ed8e/spark_nlp-2.5.5-py2.py3-none-any.whl
Installing collected packages: spark-nlp
Successfully installed spark-nlp-2.5.5
###Markdown
Import dependencies into Python and start the Spark session
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the NER model and construct the pipeline Select the NER model - Clinical Events models: **ner_clinical_large, ner_events_clinical**. For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
###Code
# You can change this to the model you want to use and re-run cells below.
# Clinical Events models: ner_clinical_large, ner_events_clinical
MODEL_NAME = "ner_clinical_large"
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical_large download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""This is the case of a very pleasant 46-year-old Caucasian female with subarachnoid hemorrhage secondary to ruptured left posteroinferior cerebellar artery aneurysm, which was clipped. The patient last underwent a right frontal ventricular peritoneal shunt on 10/12/07. This resulted in relief of left chest pain, but the patient continued to complaint of persistent pain to the left shoulder and left elbow. She was seen in clinic on 12/11/07 during which time MRI of the left shoulder showed no evidence of rotator cuff tear. She did have a previous MRI of the cervical spine that did show an osteophyte on the left C6-C7 level. Based on this, negative MRI of the shoulder, the patient was recommended to have anterior cervical discectomy with anterior interbody fusion at C6-C7 level. Operation, expected outcome, risks, and benefits were discussed with her. Risks include, but not exclusive of bleeding and infection, bleeding could be soft tissue bleeding, which may compromise airway and may result in return to the operating room emergently for evacuation of said hematoma. There is also the possibility of bleeding into the epidural space, which can compress the spinal cord and result in weakness and numbness of all four extremities as well as impairment of bowel and bladder function. Should this occur, the patient understands that she needs to be brought emergently back to the operating room for evacuation of said hematoma. There is also the risk of infection, which can be superficial and can be managed with p.o. antibiotics. However, the patient may develop deeper-seated infection, which may require return to the operating room. Should the infection be in the area of the spinal instrumentation, this will cause a dilemma since there might be a need to remove the spinal instrumentation and/or allograft. There is also the possibility of potential injury to the esophageus, the trachea, and the carotid artery. There is also the risks of stroke on the right cerebral circulation should an undiagnosed plaque be propelled from the right carotid. There is also the possibility hoarseness of the voice secondary to injury to the recurrent laryngeal nerve. There is also the risk of pseudoarthrosis and hardware failure. She understood all of these risks and agreed to have the procedure performed."""
]
###Output
_____no_output_____
###Markdown
4. Use the pipeline to create outputs
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
###Output
_____no_output_____
###Markdown
5. Visualize results Visualize outputs as data frame
###Code
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
###Output
+--------------------------------------------------------+---------+
|chunk |ner_label|
+--------------------------------------------------------+---------+
|subarachnoid hemorrhage |PROBLEM |
|ruptured left posteroinferior cerebellar artery aneurysm|PROBLEM |
|clipped |TREATMENT|
|a right frontal ventricular peritoneal shunt |TREATMENT|
|left chest pain |PROBLEM |
|persistent pain to the left shoulder and left elbow |PROBLEM |
|MRI of the left shoulder |TEST |
|rotator cuff tear |PROBLEM |
|a previous MRI of the cervical spine |TEST |
|an osteophyte on the left C6-C7 level |PROBLEM |
|MRI of the shoulder |TEST |
|anterior cervical discectomy |TREATMENT|
|anterior interbody fusion |TREATMENT|
|bleeding |PROBLEM |
|infection |PROBLEM |
|bleeding |PROBLEM |
|soft tissue bleeding |PROBLEM |
|evacuation |TREATMENT|
|hematoma |PROBLEM |
|bleeding into the epidural space |PROBLEM |
+--------------------------------------------------------+---------+
only showing top 20 rows
###Markdown
Functions to display outputs as HTML
###Code
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
###Output
_____no_output_____
###Markdown
Display example outputs as HTML
###Code
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
###Output
_____no_output_____ |
notebooks/lfm/lfm.ipynb | ###Markdown
Latent factor models (LFM)This notebook contains examples of various kinds of latent factor models.(* denotes official TF2 tutorial, not part of pyprobml repo.)* PCA * [Medium post (with TF2 code) by Mukesh Mithrakumar](https://dev.to/mmithrakumar/principal-components-analysis-with-tensorflow-2-0-21hl) * [Sklearn PCA on iris data](https://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html)* Autoencoders * [Fashion](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/ae_fashion_tf.ipynb) * [CelebA](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/ae_vae_celeba_tf.ipynb) * [MNIST 2d](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/ae_mnist_2d_tf.ipynb)* VAE * [Fashion](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/vae_fashion_tf.ipynb) * [CelebA](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/vae_celeba_tf.ipynb) * [MNIST](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lfm/vae_mnist_2d_tf.ipynb) * [MNIST *](https://www.tensorflow.org/tutorials/generative/cvae)
###Code
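# Added illustration (not from the original notebook): a minimal scikit-learn PCA
# sketch in the spirit of the "Sklearn PCA on iris data" link above.
# Assumes scikit-learn is available in the environment.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
pca = PCA(n_components=2)                       # keep the two leading latent factors
scores = pca.fit_transform(iris.data)           # project 4-d measurements to 2-d scores
evr = pca.explained_variance_ratio_             # fraction of variance per component
reconstruction = pca.inverse_transform(scores)  # map scores back to feature space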
###Output
_____no_output_____ |
Rethinking_2/Chp_06.ipynb | ###Markdown
Chapter 6
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(0, 0.02, N) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(0, 0.02, N) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(m_6_1_trace) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6: Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"])))
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18: Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(1, 0.5, size=N) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
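###Markdown
A minimal sketch (an addition for illustration, not part of the original notebook): re-fit the m_6_7 structure on the new, confounded data frame `d` simulated above, which is what the "re-run m_6_6 and m_6_7" note below asks for. The name `m_6_7_confounded` is introduced here for convenience.
###Code
# Sketch: same model as m_6_7 (treatment + fungus), fit to the confounded data
with pm.Model() as m_6_7_confounded:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    mu = p * d.h0
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_7_confounded_trace = pm.sample()
az.summary(m_6_7_confounded_trace, round_to=2)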
###Markdown
Re-run m_6_6 and m_6_7 on this dataset. Code 6.21: Including a python implementation of the sim_happiness function
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
        eligible = (popn.married == 0) & (popn.age >= 18)
        marry = np.random.binomial(1, inv_logit(popn.loc[eligible, "happiness"] - 4)) == 1
        popn.loc[eligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
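# mid encodes marital status as an integer index (0 = unmarried, 1 = married),
# so a[mid] below gives each marriage status its own intercept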
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200  # number of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(0, 0.02, N) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(0, 0.02, N) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(m_6_1_trace) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6: Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"])))
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18: Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(1, 0.5, size=N) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset. Code 6.21: Including a python implementation of the sim_happiness function
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
        eligible = (popn.married == 0) & (popn.age >= 18)
        marry = np.random.binomial(1, inv_logit(popn.loc[eligible, "happiness"] - 4)) == 1
        popn.loc[eligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200  # number of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(
m_6_1_trace
) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6: Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(
loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"]))
)
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18: Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19: Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(
1, 0.5, size=N
) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset.
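A minimal sketch of that re-fit, assuming the same priors as m_6_6 and m_6_7 above applied to the confounded `d` simulated in Code 6.20; the `m_6_6_post` and `m_6_7_post` names are illustrative, not from the book.
###Code
# re-fit m_6_6 (proportional growth, no predictors) on the new dataset
with pm.Model() as m_6_6_post:
    p = pm.Lognormal("p", 0, 0.25)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_6_post_trace = pm.sample()
# re-fit m_6_7 (treatment and fungus as predictors of the growth factor)
with pm.Model() as m_6_7_post:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_7_post_trace = pm.sample()
az.summary(m_6_7_post_trace, round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.21
Including a python implementation of the sim_happiness function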
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = (
np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
)
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200  # number of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
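# keep only the minimal backdoor adjustment sets and drop {'U'}, since U is unobserved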
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
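# keep only the minimal backdoor adjustment sets for the W -> D effect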
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
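# print each implied conditional independence only once, with its minimal conditioning set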
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(0, 0.02, N) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(0, 0.02, N) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(m_6_1_trace) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6
Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(loc=r * d["perc.fat"], scale=np.sqrt((1 - r**2) * np.var(d["perc.fat"])))
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18
Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
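# print each implied conditional independence only once, with its minimal conditioning set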
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(1, 0.5, size=N) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset.
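A minimal sketch of that re-fit, assuming the same priors as m_6_6 and m_6_7 above applied to the confounded `d` simulated in Code 6.20; the `m_6_6_post` and `m_6_7_post` names are illustrative, not from the book.
###Code
# re-fit m_6_6 (proportional growth, no predictors) on the new dataset
with pm.Model() as m_6_6_post:
    p = pm.Lognormal("p", 0, 0.25)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_6_post_trace = pm.sample()
# re-fit m_6_7 (treatment and fungus as predictors of the growth factor)
with pm.Model() as m_6_7_post:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_7_post_trace = pm.sample()
az.summary(m_6_7_post_trace, round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.21
Including a python implementation of the sim_happiness function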
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200  # number of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
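# keep only the minimal backdoor adjustment sets and drop {'U'}, since U is unobserved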
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
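# keep only the minimal backdoor adjustment sets for the W -> D effect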
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
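# print each implied conditional independence only once, with its minimal conditioning set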
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(0, 0.02, N) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(0, 0.02, N) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(m_6_1_trace) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6
Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"])))
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18
Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
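# print each implied conditional independence only once, with its minimal conditioning set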
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(1, 0.5, size=N) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset.
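A minimal sketch of that re-fit, assuming the same priors as m_6_6 and m_6_7 above applied to the confounded `d` simulated in Code 6.20; the `m_6_6_post` and `m_6_7_post` names are illustrative, not from the book.
###Code
# re-fit m_6_6 (proportional growth, no predictors) on the new dataset
with pm.Model() as m_6_6_post:
    p = pm.Lognormal("p", 0, 0.25)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_6_post_trace = pm.sample()
# re-fit m_6_7 (treatment and fungus as predictors of the growth factor)
with pm.Model() as m_6_7_post:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_7_post_trace = pm.sample()
az.summary(m_6_7_post_trace, round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.21
Including a python implementation of the sim_happiness function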
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200  # number of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
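# keep only the minimal backdoor adjustment sets and drop {'U'}, since U is unobserved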
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
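# keep only the minimal backdoor adjustment sets for the W -> D effect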
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
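# print each implied conditional independence only once, with its minimal conditioning set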
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
#az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(
m_6_1_trace
) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, br, bl, a]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
/home/jeroen/miniconda3/envs/stat-rethink2-pymc3/lib/python3.8/site-packages/arviz/utils.py:121: UserWarning: Items starting with ~: ['mu'] have not been found and will be ignored
warnings.warn(
###Markdown
Code 6.5 & 6.6
Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bl, a]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bF, a]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bL, bF, a]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
/home/jeroen/miniconda3/envs/stat-rethink2-pymc3/lib/python3.8/site-packages/seaborn/axisgrid.py:1342: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False.
fig.tight_layout(pad=layout_pad)
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(
loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"]))
)
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, p]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bf, bt, a]
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bt, a]
###Markdown
Code 6.18
Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
/home/jeroen/miniconda3/envs/stat-rethink2-pymc3/lib/python3.8/site-packages/IPython/core/pylabtools.py:132: UserWarning: Calling figure.constrained_layout, but figure not setup to do constrained layout. You either called GridSpec without the fig keyword, you are using plt.subplot, or you need to call figure or subplots with the constrained_layout=True kwarg.
fig.canvas.print_figure(bytes_io, **kw)
###Markdown
Code 6.19
Credit [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
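# print each implied conditional independence only once, with its minimal conditioning set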
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('H0', 'T', set())
('H0', 'F', set())
('T', 'H1', {'F'})
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(
1, 0.5, size=N
) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset.
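A minimal sketch of that re-fit, assuming the same priors as m_6_6 and m_6_7 above applied to the confounded `d` simulated in Code 6.20; the `m_6_6_post` and `m_6_7_post` names are illustrative, not from the book.
###Code
# re-fit m_6_6 (proportional growth, no predictors) on the new dataset
with pm.Model() as m_6_6_post:
    p = pm.Lognormal("p", 0, 0.25)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_6_post_trace = pm.sample()
# re-fit m_6_7 (treatment and fungus as predictors of the growth factor)
with pm.Model() as m_6_7_post:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=p * d.h0, sigma=sigma, observed=d.h1)
    m_6_7_post_trace = pm.sample()
az.summary(m_6_7_post_trace, round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.21
Including a python implementation of the sim_happiness function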
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = (
np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
)
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/jeroen/miniconda3/envs/stat-rethink2-pymc3/lib/python3.8/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/jeroen/miniconda3/envs/stat-rethink2-pymc3/lib/python3.8/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bA, a]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, bA, a]
###Markdown
Code 6.25
###Code
N = 200 # number of of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, b_GC, b_PC, a]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
###Markdown
Code 6.29. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
dag_6_1.draw()
###Output
_____no_output_____
###Markdown
Code 6.30. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
all_adjustment_sets
dag_6_2.draw()
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'A', 'M'})
###Markdown
Code 6.31. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
all_independencies
d
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6. The Haunted DAG & The Causal Terror
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
import daft
from causalgraphicalmodels import CausalGraphicalModel
warnings.simplefilter(action="ignore", category=FutureWarning)
%config Inline.figure_format = 'retina'
az.style.use("arviz-darkgrid")
# az.rcParams["stats.stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
**Berkson's paradox** (*selection-distortion effect*)
- A seemingly negative correlation between two variables appears when selection is based on either of them (or on their sum).
- This has everything to do with multiple regression.

**If multiple regression is useful to handle spurious correlations and clear up masking effects, can we just *add everything to the model*?**
- No. The selection-distortion effect can happen inside of a multiple regression.
- This is because adding a variable to the equation can induce **collider bias**.

**The terrible things that can happen when we simply add variables to a regression w/o a clear idea of a causal model**:
1. Multicollinearity
2. Post-treatment bias
3. Collider bias

We can examine all of these within a framework which tells us what to add and not to add to the model to get valid inferences. **However, this framework doesn't give us a valid model.** -> THEN WHAT?

6.1: Simulating science distortion
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
6.1. Multicollinearity

Definition: a very strong association between two or more predictor variables.
- **The raw correlation isn't what matters**.
- **Association** (i.e. conditional on the other variables in the model) is what matters.
- The consequence: the posterior distribution will seem to suggest that **none of the variables is reliably associated with the outcome**, even if all of them are in reality strongly associated with it.
- There's nothing wrong with multicollinearity itself. *Prediction will still work fine.*
- But understanding the model will be undermined.

6.1.1. Multicollinear legs

Predicting one's height using the length of their legs:
- Yes, it makes sense.
- But if we add both legs, something weird happens.

6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(0, 0.02, N) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(0, 0.02, N) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
6.3: What happens if we include both legs in the model
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(m_6_1_trace) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
###Markdown
6.4: Posterior means and sd look crazy.
###Code
_ = az.plot_forest(m_6_1_trace, combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
6.5 & 6.6: Plotting the posterior distribution of each leg with height (left) and posterior distribution of the *sum* of the two parameters.Because we used MCMC, the posterior samples are already in `m_6_1_trace`. **Why negative correlation?** Both legs have almost exactly the same informaiton. So if we insist on including both in a model, then ther will be a practically infinite number of combinations of `bl` and `br` that produce the same predictions.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
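###Markdown
A quick numeric check of the claim above (a sketch; assumes `m_6_1_trace` is still in memory): the posterior samples of `bl` and `br` are almost perfectly negatively correlated, so only their sum is well identified.
###Code
# posterior correlation between the two leg coefficients, and the spread of their sum (sketch)
print(np.corrcoef(m_6_1_trace["bl"], m_6_1_trace["br"])[0, 1])
print((m_6_1_trace["bl"] + m_6_1_trace["br"]).std())
###Output
_____no_output_____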
###Markdown
6.7: if we only include one leg, then it looks fine.
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
###Markdown
The mean of `bl` looks normal now.
###Code
az.plot_forest(m_6_2_trace, combined=True, figsize=[5, 2]);
###Output
_____no_output_____
###Markdown
**What does this all mean?**
- When there's multicollinearity, the posterior distribution tells us that the question we asked cannot be answered with these data (it is highly uncertain).
- **That's a great thing for a model to say: that it cannot answer the question.**
- Prediction will still do fine. The model just **doesn't make any claims about which leg is more important**.

6.1.2. Multicollinear milk

Code 6.8: the primate milk data
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
6.9: two models
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
###Markdown
The posterior distributions for bF and bL are essentially mirror images of one another.

6.10: What happens when we include both
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
###Markdown
Posterior means are closer to the zero. Also signs are flipped!
###Code
az.plot_forest(m_6_5_trace, combined=True, figsize=(5, 2));
###Output
_____no_output_____
###Markdown
6.11: the relationship between the 3 variables
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
/Users/honshi01/anaconda3/envs/stat-rethink2-pymc3/lib/python3.9/site-packages/seaborn/axisgrid.py:64: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False.
self.fig.tight_layout(*args, **kwargs)
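###Markdown
The pairwise correlations behind the pairplot (a sketch, not from the book): `perc.fat` and `perc.lactose` are strongly negatively correlated, and each is strongly correlated with `kcal.per.g`.
###Code
# correlation matrix of the three variables shown in the pairplot (sketch)
d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]].corr().round(2)
###Output
_____no_output_____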
###Markdown
Essentially, knowing just one of them is enough and the other doesn't help. **Scientific communities rarely take a causal approach on this matter.**
- What about dropping highly correlated features? This is a mistake, because **pairwise correlations are not the problem**. It is the ***conditional associations (not correlations)*** that matter.
- Even then, the right thing to do **depends on what is causing the collinearity**. ***The associations within the data alone are not enough to decide what to do.***

**Non-identifiability**
- The problem of multicollinearity is a member of a family of problems with model fitting, a family known as *non-identifiability*.
- When a parameter is *non-identifiable*, **the structure of the data and model do not make it possible to estimate the parameter's value.**
- In general, there's no guarantee that data contain much information about a parameter of interest.
- If this is true, the Bayesian model will return a posterior similar to the prior. So **comparing prior and posterior** is a good idea, as a way of seeing how much information the model extracted from the data (a prior-vs-posterior sketch follows the simulation below).
- If the two are similar, it doesn't mean the calculations are wrong; you got the right answer to the question you asked, but it might lead you to ask a better question.

6.12: Simulating collinearity
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"])))
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
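###Markdown
As noted above, comparing the prior with the posterior is one way to see how much information the model extracted from the data. A minimal sketch for one of the leg coefficients of m_6_1 (assumes `m_6_1` and `m_6_1_trace` are still in memory):
###Code
# sketch: overlay the prior and posterior for bl
with m_6_1:
    prior = pm.sample_prior_predictive(500)
fig, ax = plt.subplots(figsize=[6, 3])
az.plot_kde(prior["bl"], plot_kwargs={"color": "k"}, label="prior", ax=ax)
az.plot_kde(m_6_1_trace["bl"], plot_kwargs={"color": "C0"}, label="posterior", ax=ax)
ax.set_xlabel("bl")
ax.set_ylabel("Density");
###Output
_____no_output_____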
###Markdown
6.2. Post-treatment bias
- Omitted variable bias: mistaken inference from omitting predictor variables.
- Included variable bias: discussed less often than omitted variable bias, but it's real. *Blindly tossing variables into the causal salad is never a good idea.*
- Post-treatment bias is one form of it.

**Post-treatment bias**
- Occurs in all types of studies.
- "Post-treatment" comes from thinking about experimental design.

**Example: plant growth and fungus**
- Background: fungus slows down growth. Some treatment can affect fungus.
- Variables: initial height, treatment, presence of fungus.
- Question: **should we include fungus in the model?**
- **NO, if the goal is to make a causal inference about the treatment, because fungus is a post-treatment variable.**

Code 6.13: What happens when we include the fungus (post-treatment) in the model?
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
# treatment, p_fungus = 0.1, otherwise, p = 0.5
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N) # basically Bernoulli process
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
d.head()
###Output
_____no_output_____
###Markdown
Note that `az.summary` can also be used on raw data, not just traces; here the dataframe is converted to a dict of columns with `to_dict(orient="list")` first.
###Code
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
6.2.1. A prior is born.
- We're interested in growth, so we can model the proportion of growth instead of the absolute difference (p = 1 means no growth).
- p < 1 is possible because some plants might die.
- p > 0 because it's a proportion.
- Thus, we can use a **log-normal** prior.

Code 6.14: Log-normal prior for growth proportion
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15: Model w/ growth as proportion
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
###Markdown
On average, 40% growth. Model:
- proportion = linear sum of treatment and fungus

$$h_{1, i} \sim \text{Normal}(\mu_{i}, \sigma)$$
$$\mu_{i} = h_{0, i}p$$
$$p = \alpha + \beta_{T}T_{i} + \beta_{F}F_{i}$$
$$\alpha \sim \text{Log-Normal}(0, 0.25)$$
$$\beta_{T} \sim \text{Normal}(0, 0.5)$$
$$\beta_{F} \sim \text{Normal}(0, 0.5)$$
$$\sigma \sim \text{Exponential}(1)$$

Code 6.16: Model with both fungus and treatment
###Code
with pm.Model() as m_6_7:
a = pm.Lognormal("a", 0, 0.25)
# a = pm.Normal("a", 0, 0.2) # error?
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
###Markdown
Now the summary shows that treatment has almost no effect. What's wrong?

6.2.2. Blocked by consequence

We need to remove fungus because it's a consequence of the treatment.

Code 6.17: Model w/o fungus
###Code
with pm.Model() as m_6_8:
a = pm.Lognormal("a", 0, 0.25)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
###Markdown
Now the treatment effect is positive.

6.2.3. Fungus and d-separation

Code 6.18: Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render();
# plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Another way of saying this: **conditioning on F induces *d-separation*** (d: directional).
- Some variables on a directed graph are independent of others.
- "H1 is d-separated from T only when we condition on F."
- Conditioning on F blocks the directed path T -> F -> H1, making T and H1 independent (d-separated).
- This is the same as $H_{1} \perp T \vert F$.

Code 6.19: A handy way to query conditional independence from a DAG. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
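###Markdown
The same claim can also be checked directly on the underlying `networkx` graph (a sketch; assumes `networkx` >= 2.4, which provides `d_separated`):
###Code
import networkx as nx

# T and H1 should be d-separated given F, but not unconditionally (sketch)
print(nx.d_separated(plant_dag.dag, {"T"}, {"H1"}, {"F"}))  # expected: True
print(nx.d_separated(plant_dag.dag, {"T"}, {"H1"}, set()))  # expected: False
###Output
_____no_output_____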
###Markdown
Let's think about a different DAG: there might be an unobserved factor (M, moisture) that affects both H1 and F.
###Code
plant_dag_ = CausalGraphicalModel(
nodes=["H0", "H1", 'M', "F", "T"],
edges=[("H0", "H1"), ("T", "F"), ('M', 'H1'), ("M", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "H1": (1, 0), "T": (3, 0), "F": (2, 0), 'M': (1.5, -1)}
for node in plant_dag_.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag_.dag.edges:
pgm.add_edge(*edge)
pgm.render();
###Output
_____no_output_____
###Markdown
Code 6.20: Simulation where M is present and influences both H1 and F
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(1, 0.5, size=N) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
with pm.Model() as m_6_7:
a = pm.Lognormal("a", 0, 0.25)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
###Markdown
When we include both predictors, and M exists in the data-generating process, fungus has a positive effect on growth **even though in the DAG we know that it doesn't affect growth**.

**Rethinking: Model selection doesn't help.**
- It doesn't help in this case. It's not that the model is poor; the question we asked of it was poor.
- **No statistical procedure can substitute for scientific knowledge and attention to it**.
- We need **multiple models** because they help us **understand causal paths**, not just so we can choose one or another for *prediction*.

6.3. Collider bias

6.3.1. Collider of false sorrow

Example: happiness, marriage, age
- H -> M <- A
- Marriage is a collider.

Code 6.21: a python implementation of the sim_happiness function

Simulation design:
1. Each year, 20 people are born w/ uniformly distributed happiness values.
2. Each year, each person ages one year. Happiness does not change.
3. At age 18, people can become married. The probability of marriage each year is proportional to happiness.
4. Once married, they remain married.
5. After age 65, they leave the sample.
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
# the two scatters here are to create the legend
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Suppose we come across this dataset with no idea about the causal model behind it, and we want to measure the effect of age on happiness. We might want to condition on marriage (thinking that *marriage might be a confound*). Then we can use a linear model like this:

$$\mu_{i}=\alpha_{MID[i]} + \beta_{A} A_{i}$$

where `MID[i]` is an index for marriage status.

Code 6.22: rescaling the data so that the age range from 18 to 65 is one unit
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/Users/honshi01/anaconda3/envs/stat-rethink2-pymc3/lib/python3.9/site-packages/pandas/core/indexing.py:1597: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = value
/Users/honshi01/anaconda3/envs/stat-rethink2-pymc3/lib/python3.9/site-packages/pandas/core/indexing.py:1676: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
###Markdown
Code 6.23: Model w/ age and marriage.

Defining the priors: remember that a normal prior puts roughly 95% of its mass within ±2$\sigma$ of its mean.
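A quick check of what these priors imply on the happiness scale, which runs from -2 to 2 (a sketch, not part of the book code; `stats` is scipy.stats from the imports above):
###Code
# rough prior-implication check for m_6_9 (sketch): ~95% intervals of the priors
print("alpha ~ Normal(0, 1):", stats.norm(0, 1).interval(0.95))
print("bA    ~ Normal(0, 2):", stats.norm(0, 2).interval(0.95))
###Output
_____no_output_____
###Markdown
Now the model itself: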
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
###Markdown
- bA < 0, so the model thinkgs age is negatively associated w/ happiness. Code 6.24: Model w/o marriage
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
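###Markdown
Putting the two `bA` estimates side by side makes the collider effect easy to see (a sketch; assumes both traces above are still in memory):
###Code
# sketch: compare the age coefficient with and without conditioning on marriage
az.plot_forest(
    [m_6_9_trace, trace_6_10],
    model_names=["m_6_9 (conditions on marriage)", "m6_10 (ignores marriage)"],
    var_names=["bA"],
    combined=True,
    figsize=[7, 2],
);
###Output
_____no_output_____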
###Markdown
- This model thinks there's no association between age and happiness.

6.3.2. The haunted DAG

Simpson's paradox: the grandparent example

**DAG 1**
###Code
dag1 = CausalGraphicalModel(
nodes=["G", "P", "C"],
edges=[("G", "P"), ("G", "C"), ('P', 'C')]
)
pgm = daft.PGM()
coordinates = {"G": (0, 0), "P": (1, 0), "C": (1, -1)}
for node in dag1.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in dag1.dag.edges:
pgm.add_edge(*edge)
pgm.show()
###Output
_____no_output_____
###Markdown
**DAG 2**: unobserved variable
###Code
dag2 = CausalGraphicalModel(
nodes=["G", "P", "C", "U"],
edges=[("G", "P"), ("G", "C"), ('P', 'C'), ('U', 'P'), ('U', "C")]
)
pgm = daft.PGM()
coordinates = {"G": (0, 0), "P": (1, 0), "C": (1, -1), "U": (2, -0.5)}
for node in dag2.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in dag2.dag.edges:
pgm.add_edge(*edge)
pgm.show()
###Output
_____no_output_____
###Markdown
Code 6.25: true coefficients for the data-generating process
###Code
N = 200 # number of of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26: simulate the data
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27: Model without unobserved
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
###Markdown
Now the model says grandparent education has a negative effect.

Code 6.28: Model w/ the unobserved U included
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
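###Markdown
A quick side-by-side of the estimated direct effect of G on C from the two models (a sketch; recall the simulation used b_GC = 0):
###Code
# sketch: posterior mean of b_GC with U omitted vs. included
print("m_6_11 (U omitted): ", m_6_11_trace["b_GC"].mean().round(2))
print("m_6_12 (U included):", m_6_12_trace["b_GC"].mean().round(2))
###Output
_____no_output_____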
###Markdown
**Rethinking: Statistical paradoxes and causal explanations**
- Simpson's paradox: including another predictor (parents in this case) can reverse the direction of association between some other predictor (G) and the outcome (C).
- Usually, Simpson's paradox is presented in cases where adding the new predictor helps us. But in this case, it misleads us.
- **To know whether the reversal of the association correctly reflects causation, we need something more than just a statistical model**.

6.4. Confronting confounding

6.4.1. Shutting the backdoor

= Blocking confounding paths between some predictor X and some outcome Y.

**The four elemental confounds**
1. The fork
2. The pipe
3. The collider
4. **The descendant**: conditioning on a descendant *partly* conditions on its parent.

**Recipe for closing the backdoor** (a path-listing sketch follows Code 6.29 below)
1. List all of the paths connecting X (the potential cause of interest) and Y (the outcome).
2. Classify each path by whether it's open or closed. A path is open unless it contains a collider.
3. Classify each path by whether it's a backdoor path. A backdoor path has an arrow entering X.
4. If there are any open backdoor paths, decide which variables to condition on to close them (if possible).

Code 6.29. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
pgm = daft.PGM()
coordinates = {'X': (0, 0),
'Y': (2, 0),
'C': (2, 1),
'U': (0, 1),
'B': (1, 0.5),
'A': (1, 1.5)}
for node in dag_6_1.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in dag_6_1.dag.edges:
pgm.add_edge(*edge)
pgm.show()
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}: # U is unobserved, so we can condition on it
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
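###Markdown
Step 1 of the recipe can also be done mechanically by listing every path between X and Y while ignoring edge direction (a sketch using `networkx` on the DAG above; each path still has to be classified as open/closed and backdoor/not by hand):
###Code
import networkx as nx

# all simple paths between X and Y in the undirected skeleton of dag_6_1 (sketch)
for path in nx.all_simple_paths(dag_6_1.dag.to_undirected(), "X", "Y"):
    print(" - ".join(path))
###Output
_____no_output_____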
###Markdown
6.4.3. Backdoor waffles

Code 6.30. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31: Computing conditional independence relationships. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('W', 'M', {'S'})
('W', 'A', {'S'})
('S', 'D', {'W', 'M', 'A'})
###Markdown
**Rethinking: DAGs are not enough.**
- DAGs are not a destination. Once we have a dynamic model of our system, we don't need a DAG.
- In fact, many dynamical systems have complex behavior that is sensitive to initial conditions, and so cannot be usefully represented by DAGs.
- **But these models can still be analyzed and causal interventions designed from them**: domain-specific SCMs can make causal inference possible even when a DAG w/ the same structure cannot decide how to proceed. Additional assumptions, when accurate, give us power.

**Overthinking: A smooth operator**
- The **do-operator**: confounding occurs when $$Pr(Y|X) \neq Pr(Y|do(X))$$
- $do(X)$ means to cut all of the backdoor paths into X (a small graph-surgery sketch follows the summary below).

6.5. Summary
- Multiple regression is no oracle, and we need additional information from outside the model to make sense of it.
- Common frustrations: multicollinearity, post-treatment bias, collider bias.
- Solutions to these can be organized under a coherent framework where **hypothetical causal relations among variables are analyzed to cope with confounding.**
- Causal models exist outside the statistical model and can be difficult to test.
- However, **it's possible to reach valid causal inferences in the absence of experiments.** This is good because we often can't perform experiments, both for practical and ethical reasons.
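As a small illustration of the graph surgery behind $do(X)$: the `do()` method of `causalgraphicalmodels` returns a mutilated graph with the arrows entering the intervened node removed (a sketch; assumes `dag_6_1` defined above is still in memory):
###Code
# do(X) should drop the incoming edge U -> X while leaving the rest of dag_6_1 intact (sketch)
dag_6_1_do_x = dag_6_1.do("X")
print(sorted(dag_6_1.dag.edges))
print(sorted(dag_6_1_do_x.dag.edges))
###Output
_____no_output_____
###Markdown
Finally, the watermark for this run: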
###Code
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
###Markdown
Chapter 6
###Code
import warnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
from scipy.optimize import curve_fit
warnings.simplefilter(action="ignore", category=FutureWarning)
%config Inline.figure_format = 'retina'
az.style.use('arviz-darkgrid')
az.rcParams["stats.credible_interval"] = 0.89 # sets default credible interval used by arviz
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Code 6.1
###Code
np.random.seed(3)
N = 200 # num grant proposals
p = 0.1 # proportion to select
# uncorrelated newsworthiness and trustworthiness
nw = np.random.normal(size=N)
tw = np.random.normal(size=N)
# select top 10% of combined scores
s = nw + tw # total score
q = np.quantile(s, 1 - p) # top 10% threshold
selected = s >= q
cor = np.corrcoef(tw[selected], nw[selected])
cor
# Figure 6.1
plt.scatter(nw[~selected], tw[~selected], lw=1, edgecolor="k", color=(0, 0, 0, 0))
plt.scatter(nw[selected], tw[selected], color="C0")
plt.text(0.8, 2.5, "selected", color="C0")
# correlation line
xn = np.array([-2, 3])
plt.plot(xn, tw[selected].mean() + cor[0, 1] * (xn - nw[selected].mean()))
plt.xlabel("newsworthiness")
plt.ylabel("trustworthiness")
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
N = 100 # number of individuals
height = np.random.normal(10, 2, N) # sim total height of each
leg_prop = np.random.uniform(0.4, 0.5, N) # leg as proportion of height
leg_left = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim left leg as proportion + error
leg_right = leg_prop * height + np.random.normal(
0, 0.02, N
) # sim right leg as proportion + error
d = pd.DataFrame(
np.vstack([height, leg_left, leg_right]).T,
columns=["height", "leg_left", "leg_right"],
) # combine into data frame
d.head()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
with pm.Model() as m_6_1:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
br = pm.Normal("br", 2, 10)
mu = a + bl * d.leg_left + br * d.leg_right
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_1_trace = pm.sample()
idata_6_1 = az.from_pymc3(
m_6_1_trace
) # create an arviz InferenceData object from the trace.
# this happens automatically when calling az.summary, but as we'll be using this trace multiple
# times below it's more efficient to do the conversion once at the start.
az.summary(idata_6_1, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, br, bl, a]
Sampling 2 chains, 1 divergences: 100%|██████████| 2000/2000 [01:02<00:00, 32.23draws/s]
###Markdown
Code 6.4
###Code
_ = az.plot_forest(m_6_1_trace, var_names=["~mu"], combined=True, figsize=[5, 2])
###Output
_____no_output_____
###Markdown
Code 6.5 & 6.6Because we used MCMC (c.f. `quap`), the posterior samples are already in `m_6_1_trace`.
###Code
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=[7, 3])
# code 6.5
ax1.scatter(m_6_1_trace[br], m_6_1_trace[bl], alpha=0.05, s=20)
ax1.set_xlabel("br")
ax1.set_ylabel("bl")
# code 6.6
az.plot_kde(m_6_1_trace[br] + m_6_1_trace[bl], ax=ax2)
ax2.set_ylabel("Density")
ax2.set_xlabel("sum of bl and br");
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
with pm.Model() as m_6_2:
a = pm.Normal("a", 10, 100)
bl = pm.Normal("bl", 2, 10)
mu = a + bl * d.leg_left
sigma = pm.Exponential("sigma", 1)
height = pm.Normal("height", mu=mu, sigma=sigma, observed=d.height)
m_6_2_trace = pm.sample()
idata_m_6_2 = az.from_pymc3(m_6_2_trace)
az.summary(idata_m_6_2, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bl, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 766.84draws/s]
###Markdown
Code 6.8
###Code
d = pd.read_csv("Data/milk.csv", sep=";")
def standardise(series):
"""Standardize a pandas series"""
return (series - series.mean()) / series.std()
d.loc[:, "K"] = standardise(d["kcal.per.g"])
d.loc[:, "F"] = standardise(d["perc.fat"])
d.loc[:, "L"] = standardise(d["perc.lactose"])
d.head()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
# kcal.per.g regressed on perc.fat
with pm.Model() as m_6_3:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
mu = a + bF * d.F
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_3_trace = pm.sample()
idata_m_6_3 = az.from_pymc3(m_6_3_trace)
az.summary(idata_m_6_3, round_to=2)
# kcal.per.g regressed on perc.lactose
with pm.Model() as m_6_4:
a = pm.Normal("a", 0, 0.2)
bL = pm.Normal("bF", 0, 0.5)
mu = a + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_4_trace = pm.sample()
idata_m_6_4 = az.from_pymc3(m_6_4_trace)
az.summary(idata_m_6_4, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1843.00draws/s]
###Markdown
Code 6.10
###Code
with pm.Model() as m_6_5:
a = pm.Normal("a", 0, 0.2)
bF = pm.Normal("bF", 0, 0.5)
bL = pm.Normal("bL", 0, 0.5)
mu = a + bF * d.F + bL * d.L
sigma = pm.Exponential("sigma", 1)
K = pm.Normal("K", mu, sigma, observed=d.K)
m_6_5_trace = pm.sample()
idata_m_6_5 = az.from_pymc3(m_6_5_trace)
az.summary(idata_m_6_5, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bL, bF, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 927.51draws/s]
###Markdown
Code 6.11
###Code
sns.pairplot(d.loc[:, ["kcal.per.g", "perc.fat", "perc.lactose"]]);
###Output
_____no_output_____
###Markdown
Code 6.12
###Code
def mv(x, a, b, c):
return a + x[0] * b + x[1] * c
def sim_coll(r=0.9):
x = np.random.normal(
loc=r * d["perc.fat"], scale=np.sqrt((1 - r ** 2) * np.var(d["perc.fat"]))
)
_, cov = curve_fit(mv, (d["perc.fat"], x), d["kcal.per.g"])
return np.sqrt(np.diag(cov))[-1]
def rep_sim_coll(r=0.9, n=100):
return np.mean([sim_coll(r) for i in range(n)])
r_seq = np.arange(0, 1, 0.01)
stdev = list(map(rep_sim_coll, r_seq))
plt.scatter(r_seq, stdev)
plt.xlabel("correlation")
plt.ylabel("standard deviation of slope");
###Output
_____no_output_____
###Markdown
Code 6.13
###Code
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.14
###Code
sim_p = np.random.lognormal(0, 0.25, int(1e4))
az.summary(sim_p, kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
with pm.Model() as m_6_6:
p = pm.Lognormal("p", 0, 0.25)
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_6_trace = pm.sample()
az.summary(m_6_6_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, p]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:00<00:00, 2025.34draws/s]
###Markdown
Code 6.16
###Code
with pm.Model() as m_6_7:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
bf = pm.Normal("bf", 0, 0.5)
p = a + bt * d.treatment + bf * d.fungus
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_7_trace = pm.sample()
az.summary(m_6_7_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bf, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1125.58draws/s]
The acceptance probability does not match the target. It is 0.8936683270553085, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.17
###Code
with pm.Model() as m_6_8:
a = pm.Normal("a", 0, 0.2)
bt = pm.Normal("bt", 0, 0.5)
p = a + bt * d.treatment
mu = p * d.h0
sigma = pm.Exponential("sigma", 1)
h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
m_6_8_trace = pm.sample()
az.summary(m_6_8_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bt, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1494.36draws/s]
The acceptance probability does not match the target. It is 0.8808811481735465, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.18. Using [`causalgraphicalmodels`](https://github.com/ijmbarr/causalgraphicalmodels) for graph drawing and analysis instead of `dagitty`, following the example of [ksachdeva's Tensorflow version of Rethinking](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
import daft
from causalgraphicalmodels import CausalGraphicalModel
plant_dag = CausalGraphicalModel(
nodes=["H0", "H1", "F", "T"], edges=[("H0", "H1"), ("F", "H1"), ("T", "F")]
)
pgm = daft.PGM()
coordinates = {"H0": (0, 0), "T": (4, 0), "F": (3, 0), "H1": (2, 0)}
for node in plant_dag.dag.nodes:
pgm.add_node(node, node, *coordinates[node])
for edge in plant_dag.dag.edges:
pgm.add_edge(*edge)
pgm.render()
plt.gca().invert_yaxis()
###Output
_____no_output_____
###Markdown
Code 6.19. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = plant_dag.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
###Output
('T', 'H1', {'F'})
('T', 'H0', set())
('H0', 'F', set())
###Markdown
Code 6.20
###Code
N = 1000
h0 = np.random.normal(10, 2, N)
treatment = np.repeat([0, 1], N / 2)
M = np.random.binomial(
1, 0.5, size=N
) # assumed probability 0.5 here, as not given in book
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4 + 0.4 * M, size=N)
h1 = h0 + np.random.normal(5 + 3 * M, size=N)
d = pd.DataFrame.from_dict(
{"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
az.summary(d.to_dict(orient="list"), kind="stats", round_to=2)
###Output
_____no_output_____
###Markdown
Re-run m_6_6 and m_6_7 on this dataset

Code 6.21. Including a python implementation of the sim_happiness function
###Code
def inv_logit(x):
return np.exp(x) / (1 + np.exp(x))
def sim_happiness(N_years=100, seed=1234):
np.random.seed(seed)
popn = pd.DataFrame(np.zeros((20 * 65, 3)), columns=["age", "happiness", "married"])
popn.loc[:, "age"] = np.repeat(np.arange(65), 20)
popn.loc[:, "happiness"] = np.repeat(np.linspace(-2, 2, 20), 65)
popn.loc[:, "married"] = np.array(popn.loc[:, "married"].values, dtype="bool")
for i in range(N_years):
# age population
popn.loc[:, "age"] += 1
# replace old folk with new folk
ind = popn.age == 65
popn.loc[ind, "age"] = 0
popn.loc[ind, "married"] = False
popn.loc[ind, "happiness"] = np.linspace(-2, 2, 20)
# do the work
elligible = (popn.married == 0) & (popn.age >= 18)
marry = (
np.random.binomial(1, inv_logit(popn.loc[elligible, "happiness"] - 4)) == 1
)
popn.loc[elligible, "married"] = marry
popn.sort_values("age", inplace=True, ignore_index=True)
return popn
popn = sim_happiness()
popn_summ = popn.copy()
popn_summ["married"] = popn_summ["married"].astype(
int
) # this is necessary before using az.summary, which doesn't work with boolean columns.
az.summary(popn_summ.to_dict(orient="list"), kind="stats", round_to=2)
# Figure 6.4
fig, ax = plt.subplots(figsize=[10, 3.4])
colors = np.array(["w"] * popn.shape[0])
colors[popn.married] = "b"
ax.scatter(popn.age, popn.happiness, edgecolor="k", color=colors)
ax.scatter([], [], edgecolor="k", color="w", label="unmarried")
ax.scatter([], [], edgecolor="k", color="b", label="married")
ax.legend(loc="upper left", framealpha=1, frameon=True)
ax.set_xlabel("age")
ax.set_ylabel("hapiness");
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
adults = popn.loc[popn.age > 17]
adults.loc[:, "A"] = (adults["age"].copy() - 18) / (65 - 18)
###Output
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = _infer_fill_value(value)
/home/oscar/miniconda3/envs/py3/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
Code 6.23
###Code
mid = pd.Categorical(adults.loc[:, "married"].astype(int))
with pm.Model() as m_6_9:
a = pm.Normal("a", 0, 1, shape=2)
bA = pm.Normal("bA", 0, 2)
mu = a[mid] + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
m_6_9_trace = pm.sample(1000)
az.summary(m_6_9_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:03<00:00, 811.62draws/s]
###Markdown
Code 6.24
###Code
with pm.Model() as m6_10:
a = pm.Normal("a", 0, 1)
bA = pm.Normal("bA", 0, 2)
mu = a + bA * adults.A.values
sigma = pm.Exponential("sigma", 1)
happiness = pm.Normal("happiness", mu, sigma, observed=adults.happiness.values)
trace_6_10 = pm.sample(1000)
az.summary(trace_6_10, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, bA, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 3000/3000 [00:02<00:00, 1397.67draws/s]
###Markdown
Code 6.25
###Code
N = 200 # number of of grandparent-parent-child triads
b_GP = 1 # direct effect of G on P
b_GC = 0 # direct effect of G on C
b_PC = 1 # direct effect of P on C
b_U = 2 # direct effect of U on P and C
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
U = 2 * np.random.binomial(1, 0.5, N) - 1
G = np.random.normal(size=N)
P = np.random.normal(b_GP * G + b_U * U)
C = np.random.normal(b_PC * P + b_GC * G + b_U * U)
d = pd.DataFrame.from_dict({"C": C, "P": P, "G": G, "U": U})
# Figure 6.5
# grandparent education
bad = U < 0
good = ~bad
plt.scatter(G[good], C[good], color="w", lw=1, edgecolor="C0")
plt.scatter(G[bad], C[bad], color="w", lw=1, edgecolor="k")
# parents with similar education
eP = (P > -1) & (P < 1)
plt.scatter(G[good & eP], C[good & eP], color="C0", lw=1, edgecolor="C0")
plt.scatter(G[bad & eP], C[bad & eP], color="k", lw=1, edgecolor="k")
p = np.polyfit(G[eP], C[eP], 1)
xn = np.array([-2, 3])
plt.plot(xn, np.polyval(p, xn))
plt.xlabel("grandparent education (G)")
plt.ylabel("grandchild education (C)")
###Output
_____no_output_____
###Markdown
Code 6.27
###Code
with pm.Model() as m_6_11:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_11_trace = pm.sample()
az.summary(m_6_11_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:01<00:00, 1373.03draws/s]
###Markdown
Code 6.28
###Code
with pm.Model() as m_6_12:
a = pm.Normal("a", 0, 1)
p_PC = pm.Normal("b_PC", 0, 1)
p_GC = pm.Normal("b_GC", 0, 1)
p_U = pm.Normal("b_U", 0, 1)
mu = a + p_PC * d.P + p_GC * d.G + p_U * d.U
sigma = pm.Exponential("sigma", 1)
pC = pm.Normal("C", mu, sigma, observed=d.C)
m_6_12_trace = pm.sample()
az.summary(m_6_12_trace, round_to=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, b_U, b_GC, b_PC, a]
Sampling 2 chains, 0 divergences: 100%|██████████| 2000/2000 [00:02<00:00, 713.85draws/s]
###Markdown
Code 6.29. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_1 = CausalGraphicalModel(
nodes=["X", "Y", "C", "U", "B", "A"],
edges=[
("X", "Y"),
("U", "X"),
("A", "U"),
("A", "C"),
("C", "Y"),
("U", "B"),
("C", "B"),
],
)
all_adjustment_sets = dag_6_1.get_all_backdoor_adjustment_sets("X", "Y")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
if s != {"U"}:
print(s)
###Output
frozenset({'A'})
frozenset({'C'})
###Markdown
Code 6.30. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
dag_6_2 = CausalGraphicalModel(
nodes=["S", "A", "D", "M", "W"],
edges=[
("S", "A"),
("A", "D"),
("S", "M"),
("M", "D"),
("S", "W"),
("W", "D"),
("A", "M"),
],
)
all_adjustment_sets = dag_6_2.get_all_backdoor_adjustment_sets("W", "D")
for s in all_adjustment_sets:
if all(not t.issubset(s) for t in all_adjustment_sets if t != s):
print(s)
###Output
frozenset({'S'})
frozenset({'M', 'A'})
###Markdown
Code 6.31. Credit: [ksachdeva](https://ksachdeva.github.io/rethinking-tensorflow-probability/)
###Code
all_independencies = dag_6_2.get_all_independence_relationships()
for s in all_independencies:
if all(
t[0] != s[0] or t[1] != s[1] or not t[2].issubset(s[2])
for t in all_independencies
if t != s
):
print(s)
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.1
arviz 0.7.0
pandas 1.0.3
daft 0.1.0
pymc3 3.8
last updated: Sun May 10 2020
CPython 3.7.6
IPython 7.13.0
watermark 2.0.2
|
HeartDisease.ipynb | ###Markdown
Heart Disease Prediction
###Code
!pip install seaborn==0.9.0
import pandas as pd
import numpy as np
from fancyimpute import KNN
from scipy.stats import chi2_contingency
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.pyplot as plt
from random import randrange,uniform
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.tree import export_graphviz
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
import statsmodels.api as sn
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn import model_selection
from sklearn.metrics import classification_report,roc_auc_score,roc_curve
from sklearn.metrics import classification_report
import pickle
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
np.random.seed(123)
pd.options.mode.chained_assignment = None
data = pd.read_csv("heart.csv")
data.head()
data.sample(5)
data.describe()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null int64
target 303 non-null int64
dtypes: float64(1), int64(13)
memory usage: 33.2 KB
###Markdown
About the dataset
###Code
info = ["age","1: male, 0: female","chest pain type, 1: typical angina, 2: atypical angina, 3: non-anginal pain, 4: asymptomatic","resting blood pressure"," serum cholestoral in mg/dl","fasting blood sugar > 120 mg/dl","resting electrocardiographic results (values 0,1,2)"," maximum heart rate achieved","exercise induced angina","oldpeak = ST depression induced by exercise relative to rest","the slope of the peak exercise ST segment","number of major vessels (0-3) colored by flourosopy","thal: 3 = normal; 6 = fixed defect; 7 = reversable defect"]
for i in range(len(info)):
print(data.columns[i]+":\t\t\t"+info[i])
###Output
age: age
sex: 1: male, 0: female
cp: chest pain type, 1: typical angina, 2: atypical angina, 3: non-anginal pain, 4: asymptomatic
trestbps: resting blood pressure
chol: serum cholestoral in mg/dl
fbs: fasting blood sugar > 120 mg/dl
restecg: resting electrocardiographic results (values 0,1,2)
thalach: maximum heart rate achieved
exang: exercise induced angina
oldpeak: oldpeak = ST depression induced by exercise relative to rest
slope: the slope of the peak exercise ST segment
ca: number of major vessels (0-3) colored by flourosopy
thal: thal: 3 = normal; 6 = fixed defect; 7 = reversable defect
###Markdown
-----------------------------------------------------------------------------------
###Code
type(data)
data.shape
###Output
_____no_output_____
###Markdown
----------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- Exploratory Data Analysis (EDA)
###Code
y = data["target"]
sns.countplot(y)
target_temp = data.target.value_counts()
print(target_temp)
sns.barplot(data["sex"],data["target"])
###Output
_____no_output_____
###Markdown
We notice that, in this dataset, females are more likely to have heart problems than males. Making the data more simple
###Code
data.columns = ['age', 'sex', 'chest_pain_type', 'resting_blood_pressure', 'cholesterol', 'fasting_blood_sugar', 'rest_ecg', 'max_heart_rate_achieved',
'exercise_induced_angina', 'st_depression', 'st_slope', 'num_major_vessels', 'thalassemia', 'target']
data['sex'][data['sex'] == 0] = 'female'
data['sex'][data['sex'] == 1] = 'male'
data['chest_pain_type'][data['chest_pain_type'] == 1] = 'typical angina'
data['chest_pain_type'][data['chest_pain_type'] == 2] = 'atypical angina'
data['chest_pain_type'][data['chest_pain_type'] == 3] = 'non-anginal pain'
data['chest_pain_type'][data['chest_pain_type'] == 4] = 'asymptomatic'
data['fasting_blood_sugar'][data['fasting_blood_sugar'] == 0] = 'lower than 120mg/ml'
data['fasting_blood_sugar'][data['fasting_blood_sugar'] == 1] = 'greater than 120mg/ml'
data['rest_ecg'][data['rest_ecg'] == 0] = 'normal'
data['rest_ecg'][data['rest_ecg'] == 1] = 'ST-T wave abnormality'
data['rest_ecg'][data['rest_ecg'] == 2] = 'left ventricular hypertrophy'
data['exercise_induced_angina'][data['exercise_induced_angina'] == 0] = 'no'
data['exercise_induced_angina'][data['exercise_induced_angina'] == 1] = 'yes'
data['st_slope'][data['st_slope'] == 1] = 'upsloping'
data['st_slope'][data['st_slope'] == 2] = 'flat'
data['st_slope'][data['st_slope'] == 3] = 'downsloping'
data['thalassemia'][data['thalassemia'] == 1] = 'normal'
data['thalassemia'][data['thalassemia'] == 2] = 'fixed defect'
data['thalassemia'][data['thalassemia'] == 3] = 'reversable defect'
data['target'][data['target'] == 0] = 'no'
data['target'][data['target'] == 1] = 'yes'
###Output
_____no_output_____
###Markdown
Percentage of patients with or without heart problems assigning levels to categories
###Code
# encode the object (categorical) columns as integer category codes
categorical_columns = []
for i in range(0,data.shape[1]):
    if(data.iloc[:,i].dtypes == 'object'):
        data.iloc[:,i] = pd.Categorical(data.iloc[:,i])
        data.iloc[:,i] = data.iloc[:,i].cat.codes 
        data.iloc[:,i] = data.iloc[:,i].astype('object')
        categorical_columns.append(data.columns[i])
sns.countplot(x='target',data=data,palette="bwr")
plt.show()
countFemale = len(data[data.sex == 0])
countMale = len(data[data.sex == 1])
print("Percentage of Female Patients:{:.2f}%".format((countFemale)/(len(data.sex))*100))
print("Percentage of Male Patients:{:.2f}%".format((countMale)/(len(data.sex))*100))
###Output
Percentage of Female Patients:31.68%
Percentage of Male Patients:68.32%
###Markdown
-------------------------------------------------------------------------------------------------------------------------------
###Code
countNoDisease = len(data[data.target == 0])
countHaveDisease = len(data[data.target == 1])
print("Percentage of Patients Haven't Heart Disease: {:.2f}%".format((countNoDisease / (len(data.target))*100)))
print("Percentage of Patients Have Heart Disease: {:.2f}%".format((countHaveDisease / (len(data.target))*100)))
data.groupby('target').mean()
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------ Heart disease frequency for ages
###Code
pd.crosstab(data.age,data.target).plot(kind="bar",figsize=(20,6))
plt.title('Heart Disease Frequency for Ages')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.savefig('heartDiseaseAndAges.png')
plt.show()
###Output
_____no_output_____
###Markdown
Heart Disease Frequency for male and female
###Code
pd.crosstab(data.sex,data.target).plot(kind="bar",figsize=(15,6),color=['blue','#AA1111' ])
plt.title('Heart Disease Frequency for Sex')
plt.xlabel('Sex (0 = Female, 1 = Male)')
plt.xticks(rotation=0)
plt.legend(["Haven't Disease", "Have Disease"])
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____
###Markdown
Thalassemia vs cholesterol
###Code
plt.figure(figsize=(8,6))
sns.scatterplot(x='cholesterol',y='thalassemia',data=data,hue='target')
plt.show()
###Output
_____no_output_____
###Markdown
Thalassemia vs resting blood pressure
###Code
plt.figure(figsize=(8,6))
sns.scatterplot(x='thalassemia',y='resting_blood_pressure',data=data,hue='target')
plt.show()
###Output
_____no_output_____
###Markdown
Age vs maximum heart rate
###Code
plt.scatter(x=data.age[data.target==1], y=data.max_heart_rate_achieved[(data.target==1)], c="green")
plt.scatter(x=data.age[data.target==0], y=data.max_heart_rate_achieved[(data.target==0)])
plt.legend(["Disease", "Not Disease"])
plt.xlabel("Age")
plt.ylabel("Maximum Heart Rate")
plt.show()
###Output
_____no_output_____
###Markdown
Fasting Blood sugar Data
###Code
pd.crosstab(data.fasting_blood_sugar,data.target).plot(kind="bar",figsize=(20,10),color=['#4286f4','#f49242'])
plt.title("Heart disease according to FBS")
plt.xlabel('FBS- (Fasting Blood Sugar > 120 mg/dl) (1 = true; 0 = false)')
plt.xticks(rotation=90)
plt.legend(["Haven't Disease", "Have Disease"])
plt.ylabel('Disease or not')
plt.show()
###Output
_____no_output_____
###Markdown
Missing Value Analysis
###Code
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
names=['age','resting_blood_pressure','cholesterol','max_heart_rate_achieved','st_depression','num_major_vessels']
#Set the width and height of the plot
f, ax = plt.subplots(figsize=(7, 5))
#Correlation plot
df_corr = data.loc[:,names]
#Generate correlation matrix
corr = df_corr.corr()
#Plot using seaborn library
sns.heatmap(corr, annot = True, cmap='coolwarm',linewidths=.1)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation analysis
###Code
df_corr
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
predictors = data.drop("target",axis=1)
target = data["target"]
X_train,X_test,Y_train,Y_test = train_test_split(predictors,target,test_size=0.20,random_state=0)
X_train.shape
X_test.shape
Y_train.shape
Y_test.shape
###Output
_____no_output_____
###Markdown
Model Fitting Naive Bayes---
###Code
nb = GaussianNB()
Y_train=Y_train.astype('int')
nb.fit(X_train,Y_train)
Y_pred_nb = nb.predict(X_test)
Y_pred_nb.shape
# build confusion metrics
CM=pd.crosstab(Y_test,Y_pred_nb)
CM
#let us save TP, TN, FP, FN
TN=CM.iloc[0,0]
FP=CM.iloc[0,1]
FN=CM.iloc[1,0]
TP=CM.iloc[1,1]
#check accuracy of model
score_nb=((TP+TN)*100)/(TP+TN+FP+FN)
score_nb
# check false negative rate of the model
fnr=FN*100/(FN+TP)
fnr
###Output
_____no_output_____
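###Markdown
The accuracy and the false negative rate read off the confusion matrix above can also be obtained directly from `sklearn.metrics`. A minimal cross-check (a sketch, reusing the `Y_test` and `Y_pred_nb` objects from the cells above):
###Code
# accuracy, and FNR = FN / (FN + TP) = 1 - recall of the positive class
from sklearn.metrics import accuracy_score, recall_score
y_true_nb = Y_test.astype('int')
print(accuracy_score(y_true_nb, Y_pred_nb) * 100)
print((1 - recall_score(y_true_nb, Y_pred_nb)) * 100)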
###Markdown
--- Decision Tree
###Code
# replace target variable with yes or no
#data['target'] = data['target'].replace(0, 'No')
#data['target'] = data['target'].replace(1, 'Yes')
# to handle data imbalance issue we are dividing our dataset on basis of stratified sampling
# divide data into train and test
#X=data.values[:,0:13]
#Y=data.values[:,13]
#X_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size = 0.2)
# Decision tree - we will build the model on train data and test it on test data
C50_model = tree.DecisionTreeClassifier(criterion='entropy').fit(X_train, Y_train)
# predict new test cases
C50_Predictions = C50_model.predict(X_test) # applying decision tree model on test data set
#Create dot file to visualise tree #http://webgraphviz.com/
dotfile = open("pt.dot", 'w')
tree.export_graphviz(C50_model, out_file=dotfile, feature_names=X_train.columns)
dotfile.close()
# Confusion matrix of decision tree
CM = pd.crosstab(Y_test, C50_Predictions)
CM
#let us save TP, TN, FP, FN
TN=CM.iloc[0,0]
FP=CM.iloc[0,1]
FN=CM.iloc[1,0]
TP=CM.iloc[1,1]
#check accuracy of model
score_dt=((TP+TN)*100)/(TP+TN+FP+FN)
score_dt
# check false negative rate of the model
fnr=FN*100/(FN+TP)
fnr
###Output
_____no_output_____
###Markdown
KNN(K Nearest Neighbors) for neighbors = 7
###Code
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train,Y_train)
Y_pred_knn=knn.predict(X_test)
Y_pred_knn.shape
score_knn_7 = round(accuracy_score(Y_pred_knn,Y_test)*100,2)
print("The accuracy score achieved using KNN is: "+str(score_knn)+" %")
###Output
The accuracy score achieved using KNN is: 67.21 %
###Markdown
for neighbors = 4
###Code
knn_model=KNeighborsClassifier(n_neighbors=4).fit(X_train,Y_train)
knn_predictions=knn_model.predict(X_test)
# build confusion metrics
CM=pd.crosstab(Y_test,knn_predictions)
CM
# try K=1 through K=25 and record testing accuracy
k_range = range(1, 26)
# We can create Python dictionary using [] or dict()
scores = []
from sklearn import metrics
# We use a loop through the range 1 to 26
# We append the scores in the dictionary
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, Y_train)
    y_pred = knn.predict(X_test)
    scores.append(metrics.accuracy_score(Y_test, y_pred))
print(scores)
#let us save TP, TN, FP, FN
TN=CM.iloc[0,0]
FP=CM.iloc[0,1]
FN=CM.iloc[1,0]
TP=CM.iloc[1,1]
#check accuracy of model
score_knn_4=((TP+TN)*100)/(TP+TN+FP+FN)
score_knn_4
# check false negative rate of the model
fnr=FN*100/(FN+TP)
fnr
###Output
_____no_output_____
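###Markdown
To see where the K sweep above peaks, the recorded accuracies can be plotted against K (a short sketch reusing the `scores` list filled in the loop above):
###Code
# test-set accuracy for K = 1..25
plt.plot(range(1, 26), scores, marker='o')
plt.xlabel('K (number of neighbors)')
plt.ylabel('Test accuracy')
plt.show()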
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,Y_train)
Y_pred_lr = lr.predict(X_test)
Y_pred_lr.shape
score_lr = round(accuracy_score(Y_pred_lr,Y_test)*100,2)
print("The accuracy score achieved using Logistic Regression is: "+str(score_lr)+" %")
###Output
The accuracy score achieved using Logistic Regression is: 86.89 %
###Markdown
Final Score
###Code
scores = [score_lr,score_nb,score_knn_7,score_dt]
algorithms = ["Logistic Regression","Naive Bayes","K-Nearest Neighbors","Decision Tree"]
for i in range(len(algorithms)):
print("The accuracy score achieved using "+algorithms[i]+" is: "+str(scores[i])+" %")
sns.set(rc={'figure.figsize':(15,8)})
plt.xlabel("Algorithms")
plt.ylabel("Accuracy score")
sns.barplot(algorithms,scores)
###Output
_____no_output_____
###Markdown
Heart Disease UCI
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import zipfile
#!pip install kaggle
#1) Open the heart.csv file
!kaggle datasets download -d ronitf/heart-disease-uci
zf=zipfile.ZipFile('heart-disease-uci.zip')
z=zf.extractall()
data=pd.read_csv('heart.csv')
#3) Check the data types and the first/last 5 rows
display(data.head())
display(data.tail())
#5) Convert AHD (=target): No=0, Yes=1
# The target values in this csv file are already binary, so no conversion is needed
#6) Summary statistics
data.describe()
#7) Drop rows with missing values
data = data.dropna()
#8) Histogram of age (check the distribution)
data['age'].hist(rwidth=0.9)
plt.title('Histogram of age')
plt.ylabel('Frequency')
plt.xlabel('age')
plt.show()
#9) Pie plot of sex (shown as percentages)
sex=data['sex'].value_counts()
sex.plot.pie(legend=True, autopct='%.2f%%')
plt.title('Sex')
plt.show()
#10) Count bar chart of chest pain type
cp=data['cp'].value_counts()
cp.plot.bar()
plt.title('ChestPain')
plt.xlabel('chest pain type')
plt.ylabel('Frequency')
plt.show()
#11) Relationship of age with max heart rate and with blood pressure (correlation coefficient, scatter plot)
print("Correlation between age and max heart rate = ", data['age'].corr(data['thalach']))
print("Correlation between age and blood pressure = ", data['age'].corr(data['trestbps']))
data.plot.scatter(x='age', y='thalach', c='red')
data.plot.scatter(x='age', y='trestbps', c='green')
#12) Select the 4 attributes most correlated with heart disease (target/AHD) and analyse their relationship with scatter_matrix()
print("Correlation with heart disease = \n",data.corrwith(data['target']))
dt=data[['cp', 'restecg', 'thalach', 'slope']]
print("The 4 attributes most correlated with heart disease = ")
display(dt)
pd.plotting.scatter_matrix(dt, s=60, diagonal='kde', cmap='spring')
plt.show()
#13) Check the distribution of the 4 features (boxplot)
dt.boxplot()
plt.show()
#14) Compute the max, min, mean and standard deviation grouped by target (AHD)
ahd=data.groupby('target')
print("Max grouped by AHD =\n")
display(ahd.max())
print("Min grouped by AHD =\n")
display(ahd.min())
print("Mean grouped by AHD =\n")
display(ahd.mean())
print("Std grouped by AHD =\n")
display(ahd.std())
#15) Build training and test sets from the 4 features and target (AHD) (test set = 20% of the data)
from sklearn.model_selection import train_test_split
dt['target']=data['target']
X = dt.iloc[:, :-1].values
y = dt['target'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
#16) Compare the test performance of 1-NN to 9-NN (check with a plot)
#!pip install mglearn
import mglearn
from sklearn.neighbors import KNeighborsClassifier
training_accuracy = []
test_accuracy = []
neighbors_settings = range(1, 10)
for n_neighbors in neighbors_settings:
    # build the model
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    clf.fit(X_train, y_train)
    # record the training-set accuracy
    training_accuracy.append(clf.score(X_train, y_train))
    # record the generalization (test-set) accuracy
    test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="train accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("accuracy")
plt.xlabel("n_neighbors")
plt.legend()
###Output
c:\users\wlgh3\venv\tensorflow\lib\site-packages\sklearn\externals\six.py:31: DeprecationWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).
"(https://pypi.org/project/six/).", DeprecationWarning)
c:\users\wlgh3\venv\tensorflow\lib\site-packages\sklearn\externals\joblib\__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=DeprecationWarning)
###Markdown
**Heart Disease**You may find the dataset that I used [here](https://www.kaggle.com/ronitf/heart-disease-uci) CreditsCreators:1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, MD.2. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.3. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D.Donor:1. David W. Aha (aha '@' ics.uci.edu) (714) 856-8779 **Setup**---
###Code
# import necessary libraries
import pandas as pd
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import preprocessing
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import numpy as np
import warnings
from numpy import arange
warnings.simplefilter('ignore')
#. import the data and view a preview
data = pd.read_csv('heart.csv')
data.head()
###Output
_____no_output_____
###Markdown
**Data Visualization**--- We will assume that the data is reliable, since it comes from a credible source (UCI). Plot of mean features of each target
###Code
#. group by target and get the mean of each feature
tar = data.groupby('target')
tar.agg('mean').plot(kind='bar')
###Output
_____no_output_____
###Markdown
Changing the color palette for the plot below
###Code
ui = ['#47476b','#ff0000']
sns.set_palette(ui)
###Output
_____no_output_____
###Markdown
We have a roughly equal split between the two classes.
###Code
sns.countplot(data['target'])
plt.show()
###Output
_____no_output_____
###Markdown
Change the color palette to help color-blind readers see the graphs
###Code
sns.set_palette(sns.color_palette("Paired"))
#. get the confirmed cases
cases = data.query('target == 1')
###Output
_____no_output_____
###Markdown
Do males have a higher chance of having heart disease?
###Code
sns.countplot(cases['sex'])
###Output
_____no_output_____
###Markdown
Exang(exercise induced angina) is common in heart disease patients
###Code
sns.countplot(cases['exang'])
###Output
_____no_output_____
###Markdown
How do resting electrocardiographic results affect heart disease?
###Code
sns.countplot(cases['restecg'])
###Output
_____no_output_____
###Markdown
For the positive cases, the mean age µ is approximately 53 and the standard deviation σ is approximately 9.5
###Code
np.mean(cases['age'])
np.std(cases['age'])
sns.distplot(cases['age'])
plt.show()
###Output
_____no_output_____
###Markdown
Define the dependent and independent features
###Code
X = data.drop(columns=['target'])
Y = data['target']
###Output
_____no_output_____
###Markdown
Plot of the features
###Code
X.plot(figsize=(45,45))
plt.show()
corr = data.corr()
mask = np.triu(np.ones_like(corr, dtype=np.bool))
sns.heatmap(corr,mask=mask)
plt.show()
###Output
_____no_output_____
###Markdown
**Get Important Features and Define Functions**---
###Code
def find_optimal_params(model,params):
"""
Get Optimal Parameters for a model
    :param model: machine learning model (estimator)
    :param params: dict mapping parameter names to the ranges to search over
    :return: a dict mapping each parameter to its optimal value
"""
grid = GridSearchCV(model,param_grid=params)
grid.fit(X_train,y_train)
return grid.best_params_
###Output
_____no_output_____
###Markdown
Finding features that affect heart disease mostly
###Code
features = SelectKBest(chi2,k=7)
features.fit(X,Y)
X.columns[features.get_support()]
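# Illustrative sketch (not in the original notebook): look at the raw chi2 score
# of every feature, sorted, rather than only the top-k mask returned above.
chi2_scores, chi2_pvalues = chi2(X, Y)
print(pd.Series(chi2_scores, index=X.columns).sort_values(ascending=False))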
#. keep track of the information
models = []
scores = []
X_train,x_test,y_train,y_test = train_test_split(X,Y)
###Output
_____no_output_____
###Markdown
**Logistic Regression**---
###Code
print("Finding the optimal params for the C param for the Logistic Regression model ")
#. find the optimal param with a predefined function
optimal_param = find_optimal_params(LogisticRegression(),{'C':range(5,110,5)})['C']
print("\nOptimal Param for C =",optimal_param)
print("\n\t\tLogistic Regression Model\n")
#. make the LogisticRegression model
model = LogisticRegression(C = 20).fit(X_train,y_train)
#. get the test score and append it to scores list
test_score = model.score(x_test,y_test)
scores.append(test_score)
test_predicted = model.predict(x_test)
print(classification_report(y_test,test_predicted))
sns.heatmap(confusion_matrix(y_test,test_predicted),annot=True)
plt.show()
print("Cross Validation Score\n")
np.average(cross_val_score(LogisticRegression(C = 45),X,Y))
###Output
Cross Validation Score
###Markdown
**Support Vector Machine**---
###Code
print("Finding the optimal params for the C param for the SVC model ")
#. find the optimal param with a predefined function
optimal_param = find_optimal_params(SVC(kernel='rbf'),{'C':range(5,110,5)})['C']
print("\nOptimal Param for C =",optimal_param)
print("\n\t\tSVC Model with Radial Basis Kernel\n")
model = SVC(C = 90,kernel='rbf').fit(X_train,y_train)
test_score = model.score(x_test,y_test)
scores.append(test_score)
predicted = model.predict(x_test)
print(classification_report(y_test,predicted))
sns.heatmap(confusion_matrix(y_test,predicted),annot=True)
plt.show()
print("Cross Validation Score\n")
np.average(cross_val_score(SVC(C = 90),X,Y))
###Output
Cross Validation Score
###Markdown
**Random Forest Classifier**---
###Code
print("Finding the optimal params for the n_estimators param for the RandomForestClassifier model ")
optimal_param = find_optimal_params(RandomForestClassifier(),{'n_estimators':range(5,20,5)})['n_estimators']
print("\nOptimal Param for n_estimators =",optimal_param)
print("\n\t\tRandomForestClassifier with Radial Basis Kernel\n")
model = RandomForestClassifier(n_estimators=15).fit(X_train,y_train)
scores.append(model.score(x_test,y_test))
predicted = model.predict(x_test)
print(classification_report(y_test,predicted))
sns.heatmap(confusion_matrix(y_test,predicted),annot=True)
plt.show()
print("Cross Validation Score\n")
np.average(cross_val_score(RandomForestClassifier(n_estimators=15),X,Y))
###Output
Cross Validation Score
###Markdown
**Adaboost Classifier**---
###Code
print("Finding the optimal params for the learningrate and n_estimaters param for the AdaboostClassifier model ")
optimal_param = find_optimal_params(AdaBoostClassifier(),{'learning_rate':arange(0.01,1,0.01),'n_estimators':range(1,5,50)})
print("\nOptimal Param for n_estimators =",optimal_param)
print("\n\t\tAdaboost Classifier\n")
model = AdaBoostClassifier(learning_rate=0.01,n_estimators=1).fit(X_train,y_train)
scores.append(model.score(x_test,y_test))
predicted = model.predict(x_test)
print(classification_report(y_test,predicted))
sns.heatmap(confusion_matrix(y_test,predicted),annot=True)
plt.show()
print("Cross Validation Score\n")
np.average(cross_val_score(AdaBoostClassifier(learning_rate=0.01,n_estimators=1),X,Y))
###Output
Cross Validation Score
###Markdown
**Conclusion and AfterMarks**---
###Code
classifiers = ['Logistic Regression','Support Vector Machine','Random Forest Classifier','Adaboost Classifier']
data = pd.DataFrame({'Classifiers':classifiers,'Scores':scores})
data.head()
px.bar(data,x="Classifiers",y='Scores',title="Test Scores")
###Output
_____no_output_____ |
data_assimilation/01a_lorenz.ipynb | ###Markdown
Lorenz's (1963) model: forward modelling$$\newcommand{\myd}{\mathrm{d}}\newcommand{\statev}{\mathbf{X}}\newcommand{\lrayleigh}{{\mathrm{r}}}\newcommand{\rayleigh}{{\mathrm{Ra}}}\newcommand{\pr}{\mathrm{Pr}}\newcommand{\adj}{T}\newcommand{\tstep}{\Delta t}$$ IntroductionA good understanding of the forward model is mandatory before any practice of data assimilation. The goal of this first notebook is to get familiar with the numerical model we are dealing with today. The model we are interested in is [the famous model proposed by Edward Lorenz in 1963](https://doi.org/10.1175/1520-0469(1963)020%3C0130:DNF%3E2.0.CO;2). This model is the canonical example of a set of coupled deterministic, nonlinear, ordinary differential equations (ode) able to exhibit chaotic behaviour. It is a simplified, 3-variable representation of atmospheric cellular convection, based on the earlier work of Saltzman (1962). Its time evolution is governed by the following set of nondimensional equations $$\begin{eqnarray}\frac{\myd X}{\myd t} &=& -\pr (X -Y), \label{eq:lorx}\\ \frac{\myd Y}{\myd t} &=& -XZ +\lrayleigh X -Y, \label{eq:lory}\\\frac{\myd Z}{\myd t} &=& XY - bZ,\label{eq:lorz} \end{eqnarray}$$which has to be supplemented with a (column) vector of initial conditions$$\statev_0=\left[X(t=0),Y(t=0),Z(t=0) \right]^{\adj}.$$The variable $X$ is connected with the streamfunction describing atmospheric flow, while both variables $Y$ and $Z$ are connected with the temperature deviation responsible for convection. So the state of atmospheric quiescence (no convection) is described by $\statev=[0,0,0]^\adj$. Three nondimensional numbers define the parameter space: + $\pr$, the Prandtl number, which is the ratio of kinematic viscosity to thermal diffusivity + $\lrayleigh$, which is the ratio of the Rayleigh number $\rayleigh$ to the critical value of the Rayleigh number $\rayleigh_c$ (in convection parlance, $\lrayleigh$ tells you how many times supercritical the system is) + $b$, which is a geometrical factor (Note that time has been non-dimensionalized using the thermal diffusion timescale as the timescale of reference.) Original integration by Lorenz We first stick to Lorenz's original choice of parameters and initial condition, and pick$$ \begin{eqnarray} \pr&=&10, \label{eq:lorpar1}\\ \lrayleigh&=&28, \label{eq:lorpar2}\\ b&=&8/3. \label{eq:lorpar3} \end{eqnarray}$$ The equations are integrated numerically using a standard explicit integration scheme, known as the explicit Runge-Kutta scheme of order 4 (aka RK4), using the ode solver that comes with scipy (see the `forwardModel.py` python script) . This means in practice that the time axis is discretized, being divided into segments of width $\tstep$; 4th order accuracy means that the error characterizing this numerical approximation is proportional to $\tstep ^4$. The piece of code below allows you to run the model and visualize the time evolution of $X$, $Y$ and $Z$. You can increase the value of the total integration time $T$ to see the long-term evolution of the solution.
###Code
## Uncomment this line to make it interactive in JupyterLab!
# %matplotlib widget
import sys
import numpy as np
import forwardModel as fw
import matplotlib.pyplot as plt
# Getting familiar with Lorenz' 1963 model
# Control parameters
rayleigh = 28 # Value of the Rayleigh number ratio
prandtl = 10.
b = 8./3.
#integration time parameters
dt = 1.e-3 # This is the time step size
T = 30. # Total integration time (in nondimensional time)
n_steps = int( np.ceil( T / dt) )
time = np.linspace(0., T, n_steps + 1, endpoint=True) # array of discrete times
#initial condition
x0 = np.array( [0., 1., 0.], dtype=float )
#numerical integration given initial conditions and control parameters
x = fw.forwardModel_r( x0, time, rayleigh, prandtl, b)
#plot result
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True)
for k, comp in enumerate (["X","Y","Z"]):
ax[k].plot(time, x[k,:], label='L63 - '+comp)
ax[k].set_ylabel(comp)
ax[k].legend()
ax[-1].set_xlabel('Time')
ax[-1].set_xlim(time[0],time[-1])
plt.show()
###Output
_____no_output_____
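###Markdown
For reference, the right-hand side integrated by `forwardModel_r` corresponds to the three ODEs written above. Below is a minimal, self-contained sketch of that right-hand side together with a scipy-based integration. This is an illustration only: the provided `forwardModel.py` may use a fixed-step RK4 rather than the adaptive RK45 used by default here, and the name `lorenz_rhs` is ours, not part of the course material.
###Code
# Illustrative sketch of the Lorenz-63 right-hand side and its integration
from scipy.integrate import solve_ivp

def lorenz_rhs(t, state, prandtl, rayleigh, b):
    X, Y, Z = state
    return [-prandtl * (X - Y),          # dX/dt
            -X * Z + rayleigh * X - Y,   # dY/dt
            X * Y - b * Z]               # dZ/dt

sketch = solve_ivp(lorenz_rhs, (0., 30.), [0., 1., 0.],
                   args=(prandtl, rayleigh, b), rtol=1e-9, atol=1e-12)
print(sketch.y.shape)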
###Markdown
The Butterfly effect: sensitivity to a slight change in parameters or initial conditionAny change in the initial condition or control parameter will result in a solution that will diverge from the reference solution - this is the Butterfly effect, a hallmark of chaotic dynamical systems that makes long-term prediction of the dynamics of such systems impossible. Below you can see how a change in the initial condition, by an amount $\epsilon_{ic}$ affects the solution. In the code, the change affects the $Y$ variable; feel free to apply it to the other variables, and also to vary its magnitude. Likewise, let $\epsilon_{par}$ be the amount by which the parameter $\lrayleigh$ is changed. You can also visualize how this impacts the solution and play with its magnitude as well.
###Code
#Sensitivity to initial condition
#Change initial condition and compare solutions
epsilon_ic = 2.e-1 # feel free to change this value
x0 = np.array( [0., 1.+epsilon_ic, 0.], dtype=float )
x2 = fw.forwardModel_r( x0, time, rayleigh, prandtl, b)
#Sensitivity to control parameters
epsilon_par = .1
x0 = np.array( [0., 1, 0.], dtype=float )
x3 = fw.forwardModel_r( x0, time, rayleigh+epsilon_par, prandtl, b)
#Plot all three trajectories
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True)
for k, comp in enumerate (["X","Y","Z"]):
ax[k].plot(time, x[k,:], label='L63')
ax[k].plot(time, x2[k,:], label="pert. IC,$\epsilon_{IC}$="+str(epsilon_ic))
ax[k].plot(time, x3[k,:], label="pert. $r$, $\epsilon_{Ra}$="+str(epsilon_par)) # effect of changing the value of one control parameter, Ra
ax[k].set_ylabel( comp )
ax[k].legend( loc='best')
ax[-1].set_xlabel('Time')
ax[-1].set_xlim(time[0],time[-1])
plt.show()
###Output
_____no_output_____ |
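###Markdown
To make the butterfly effect more quantitative, one can track the distance between the reference and perturbed trajectories: on a logarithmic scale it grows roughly linearly in time (i.e. exponentially) before saturating at the size of the attractor. A minimal sketch, reusing the arrays `x`, `x2` and `time` computed above:
###Code
# Euclidean distance between the reference and perturbed-IC trajectories
distance = np.linalg.norm(x - x2, axis=0)
plt.semilogy(time, distance)
plt.xlabel('Time')
plt.ylabel('Distance between trajectories')
plt.show()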
Model Evaluation/Model Evaluation.ipynb | ###Markdown
Lesson 1 - Training and Testing Models Code and quizzes**Training models in sklearn**
###Code
# lesson level imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Read the data
data = pd.read_csv('data_training.csv')
# Split the data into X and y
X = np.array(data[['x1', 'x2']])
y = np.array(data['y'])
# import statements for the classification algorithms
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
plt.scatter(data['x1'],data['x2'], c = data['y']);
plt.title('Just a Plot of the Data');
plt.xlabel('x1');
plt.ylabel('x2');
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X,y)
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(X,y)
from sklearn.svm import SVC
classifier = SVC()
classifier.fit(X,y)
#figure out how to plot the classifier result later.
###Output
_____no_output_____
###Markdown
**Tuning Parameters Manually**Parameters used in SVC: - kernel (string): 'linear', 'poly', 'rbf'. - degree (integer): This is the degree of the polynomial kernel, if that's the kernel you picked (goes with poly kernel). - gamma (float): The gamma parameter (goes with rbf kernel). - C (float): The C parameter.
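For instance, an SVC configured with a polynomial kernel combines `kernel`, `degree` and `C` as below (an illustrative sketch only, not the parameter values this quiz expects):
###Code
from sklearn.svm import SVC
example_classifier = SVC(kernel='poly', degree=4, C=0.1)
###Markdown
Now load the quiz data and try to classify it: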
###Code
import pandas
import numpy
# Read the data
data = pandas.read_csv('data_tuning.csv')
# Split the data into X and y
X = numpy.array(data[['x1', 'x2']])
y = numpy.array(data['y'])
# Import the SVM Classifier
from sklearn.svm import SVC
# Play with different values for these, from the options above.
# Hit 'Test Run' to see how the classifier fit your data.
# Once you can correctly classify all the points, hit 'Submit'.
classifier = SVC(kernel = 'rbf', gamma = 200)
# Fit the classifier
classifier.fit(X,y)
###Output
_____no_output_____
###Markdown
Point of the exercise: it is not easy to fit these parameters manually.**Testing in sklearn**
###Code
# Reading the csv file
import pandas as pd
data = pd.read_csv("data_training.csv")
# Splitting the data into X and y
import numpy as np
X = np.array(data[['x1', 'x2']])
y = np.array(data['y'])
# Import statement for train_test_split
from sklearn.model_selection import train_test_split
#split the data into training and testing sets with 25% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size = 0.25)
###Output
_____no_output_____
###Markdown
**Model Selection - Learning Curves**
###Code
# Import, read, and split data
import pandas as pd
data = pd.read_csv('data_lcurve.csv')
import numpy as np
import util
X = np.array(data[['x1', 'x2']])
y = np.array(data['y'])
# Fix random seed
np.random.seed(55)
### Imports
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
### Decision Tree
estimator = GradientBoostingClassifier()
util.draw_learning_curves(X,y,estimator,10)
## model is "just right" (of given options)
### Logistic Regression
estimator = LogisticRegression()
util.draw_learning_curves(X,y,estimator,10)
## model is high bias
### Support Vector Machine
estimator = SVC(kernel='rbf', gamma=1000)
util.draw_learning_curves(X,y,estimator,10)
##model is high variance
###Output
_____no_output_____ |
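###Markdown
The `util` module used above is a course-provided helper. A minimal sketch of what such a `draw_learning_curves` function could look like, built on `sklearn.model_selection.learning_curve` (the real helper may differ in details such as the cross-validation strategy and styling):
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve

def draw_learning_curves_sketch(X, y, estimator, num_trainings):
    # evaluate the estimator on growing fractions of the training data
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, train_sizes=np.linspace(0.1, 1.0, num_trainings))
    plt.plot(np.mean(train_scores, axis=1), 'o-', label='Training score')
    plt.plot(np.mean(test_scores, axis=1), 'o-', label='Cross-validation score')
    plt.title('Learning Curves')
    plt.xlabel('Training set size (index)')
    plt.ylabel('Score')
    plt.legend(loc='best')
    plt.show()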
forest_version/07_Variational_Circuits.ipynb | ###Markdown
Current and near-term quantum computers suffer from imperfections, as we have repeatedly pointed out. This is why we cannot run long algorithms, that is, deep circuits on them. A new breed of algorithms started to appear in 2013 that focus on getting an advantage from imperfect quantum computers. The basic idea is extremely simple: run a short sequence of gates where some gates are parametrized. Then read out the result, make adjustments to the parameters on a classical computer, and repeat the calculation with the new parameters on the quantum hardware. This way we create an iterative loop between the quantum and the classical processing units, creating classical-quantum hybrid algorithms.These algorithms are also called variational to reflect the variational approach to changing the parameters. One of the most important examples of this approach is the quantum approximate optimization algorithm, which is the subject of this notebook. Quantum approximate optimization algorithmThe quantum approximate optimization algorithm (QAOA) is a shallow-circuit variational algorithm for gate-model quantum computers that was inspired by quantum annealing. We discretize the adiabatic pathway in some $p$ steps, where $p$ influences precision. Each discrete time step $i$ has two parameters, $\beta_i, \gamma_i$. The classical variational algorithm does an optimization over these parameters based on the observed energy at the end of a run on the quantum hardware.More formally, we want to discretize the time-dependent $H(t)=(1-t)H_0 + tH_1$ under adiabatic conditions. We achieve this by Trotterizing the unitary. For instance, for time step $t_0$, we can split this unitary as $U(t_0) = U(H_0, \beta_0)U(H_1, \gamma_0)$. We can continue doing this for subsequent time steps, eventually splitting up the evolution into $p$ such chunks:$$U = U(H_0, \beta_0)U(H_1, \gamma_0)\ldots U(H_0, \beta_p)U(H_1, \gamma_p).$$At the end of optimizing the parameters, this discretized evolution will approximate the adiabatic pathway:The Hamiltonian $H_0$ is often referred to as the driving or mixing Hamiltonian, and $H_1$ as the cost Hamiltonian. The simplest mixing Hamiltonian is $H_0 = -\sum_i \sigma^X_i$, the same as the initial Hamiltonian in quantum annealing. By alternating between the two Hamiltonians, the mixing Hamiltonian drives the state towards an equal superposition, whereas the cost Hamiltonian tries to seek its own ground state.Let us import the necessary packages first:
###Code
import numpy as np
from functools import partial
from pyquil import Program, api
from pyquil.paulis import PauliSum, PauliTerm, exponential_map, sZ
from pyquil.gates import *
from scipy.optimize import minimize
from forest_tools import *
np.set_printoptions(precision=3, suppress=True)
qvm_server, quilc_server, fc = init_qvm_and_quilc()
n_qubits = 2
###Output
_____no_output_____
###Markdown
Now we can define our mixing Hamiltonian on some qubits. As in the notebook on classical and quantum many-body physics, we had to define, for instance, an `IZ` operator to express $\mathbb{I}\otimes\sigma_1^Z$, that is, the $\sigma_1^Z$ operator acting only on qubit 1. We can achieve the same effect the following way (this time using the Pauli-X operator). The coefficient here means the strength of the transverse field at the given qubit. This operator will act trivially on all qubits, except the given one. Let's define the mixing Hamiltonian over two qubits:
###Code
Hm = [PauliTerm("X", i, -1.0) for i in range(n_qubits)]
###Output
_____no_output_____
###Markdown
As an example, we will minimize the Ising problem defined by the cost Hamiltonian $H_c=-\sigma^Z_1 \otimes \sigma^Z_2$, whose minimum is reached whenever $\sigma^Z_1 = \sigma^Z_2$ (for the states $|-1, -1\rangle$, $|11\rangle$ or any superposition of both)
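As a quick sanity check (worked out by hand, not taken from the code), the four computational basis states have energies$$\langle 00|H_c|00\rangle = \langle 11|H_c|11\rangle = -1, \qquad \langle 01|H_c|01\rangle = \langle 10|H_c|10\rangle = +1,$$so the ground space of $H_c$ is indeed spanned by $|00\rangle$ and $|11\rangle$.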
###Code
J = np.array([[0,1],[0,0]]) # weight matrix of the Ising model. Only the coefficient (0,1) is non-zero.
Hc = []
for i in range(n_qubits):
for j in range(n_qubits):
Hc.append(PauliTerm("Z", i, -J[i, j]) * PauliTerm("Z", j, 1.0))
###Output
_____no_output_____
###Markdown
During the iterative procedure, we will need to compute $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$. Using the function `exponential_map` of PyQuil, we can build two functions that take respectively $\beta$ and $\gamma$ and return $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$
###Code
exp_Hm = []
exp_Hc = []
for term in Hm:
exp_Hm.append(exponential_map(term))
for term in Hc:
exp_Hc.append(exponential_map(term))
###Output
_____no_output_____
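###Markdown
Each element of `exp_Hc` and `exp_Hm` is a function of a single angle that returns the corresponding pyquil `Program`. As a quick illustration (a sketch; the index 1 relies on the construction order of `Hc` above, where the second term is the $\sigma^Z_0 \sigma^Z_1$ coupling):
###Code
# Quil program implementing exp(-i * 0.5 * (-1.0) * Z0 Z1)
print(exp_Hc[1](0.5))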
###Markdown
We set the number of time evolution steps $p=1$ and initialize the $\beta_i$ and $\gamma_i$ parameters:
###Code
p = 1
β = np.random.uniform(0, np.pi*2, p)
γ = np.random.uniform(0, np.pi*2, p)
###Output
_____no_output_____
###Markdown
The initial state is a uniform superposition of all the states $|q_1,...,q_n\rangle$. It can be created using Hadamard gates on all the qubits $|0\rangle$ of a new program.
###Code
initial_state = Program()
for i in range(n_qubits):
initial_state += H(i)
###Output
_____no_output_____
###Markdown
To create the circuit, we need to compose the different unitaries given by the `exponential_map` functions defined above.
###Code
def create_circuit(β, γ):
circuit = Program()
circuit += initial_state
for i in range(p):
for term_exp_Hc in exp_Hc:
circuit += term_exp_Hc(-β[i])
for term_exp_Hm in exp_Hm:
circuit += term_exp_Hm(-γ[i])
return circuit
###Output
_____no_output_____
###Markdown
We now create a function `evaluate_circuit` that takes a single vector `beta_gamma` (the concatenation of $\beta$ and $\gamma$) and returns $\langle H_c \rangle = \langle \psi | H_c | \psi \rangle$ where $\psi$ is defined by the circuit created with the function above.
###Code
def evaluate_circuit(beta_gamma):
β = beta_gamma[:p]
γ = beta_gamma[p:]
circuit = create_circuit(β, γ)
return qvm.pauli_expectation(circuit, sum(Hc))
###Output
_____no_output_____
###Markdown
Finally, we optimize the angles:
###Code
qvm = api.QVMConnection(endpoint=fc.sync_endpoint, compiler_endpoint=fc.compiler_endpoint)
result = minimize(evaluate_circuit, np.concatenate([β, γ]), method='L-BFGS-B')
result
###Output
_____no_output_____
###Markdown
Analysis of the resultsWe create a circuit using the optimal parameters found.
###Code
circuit = create_circuit(result['x'][:p], result['x'][p:])
###Output
_____no_output_____
###Markdown
We use the `WavefunctionSimulator` in order to display the state created by the circuit.
###Code
wf_sim = api.WavefunctionSimulator(connection=fc)
state = wf_sim.wavefunction(circuit)
print(state)
###Output
_____no_output_____
###Markdown
We see that the state is approximately $(0.5 + 0.5i) \left( |00 \rangle + |11 \rangle \right) = e^{i \theta} \frac{1}{\sqrt{2}} \left( |00 \rangle + |11 \rangle \right)$, where $\theta$ is a phase factor that doesn't change the probabilities. It corresponds to a uniform superposition of the two solutions of the classical problem: $(\sigma_1=1$, $\sigma_2=1)$ and $(\sigma_1=-1$, $\sigma_2=-1)$. Let's now try to evaluate the operators $\sigma^Z_1$ and $\sigma^Z_2$ independently:
###Code
print(qvm.pauli_expectation(circuit, PauliSum([sZ(0)])))
print(qvm.pauli_expectation(circuit, PauliSum([sZ(1)])))
###Output
_____no_output_____
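###Markdown
Both single-qubit expectation values should be close to zero, while the two-qubit correlator $\langle \sigma^Z_0 \sigma^Z_1 \rangle$ should be close to $+1$ for a state concentrated on $|00\rangle$ and $|11\rangle$. A minimal check of the correlator, reusing the `qvm` connection and the `circuit` from above:
###Code
# <Z0 Z1> should be ~ +1, i.e. the cost <Hc> = -<Z0 Z1> is ~ -1
print(qvm.pauli_expectation(circuit, PauliSum([sZ(0) * sZ(1)])))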
###Markdown
Current and near-term quantum computers suffer from imperfections, as we have repeatedly pointed out. This is why we cannot run long algorithms, that is, deep circuits on them. A new breed of algorithms started to appear in 2013 that focus on getting an advantage from imperfect quantum computers. The basic idea is extremely simple: run a short sequence of gates where some gates are parametrized. Then read out the result, make adjustments to the parameters on a classical computer, and repeat the calculation with the new parameters on the quantum hardware. This way we create an iterative loop between the quantum and the classical processing units, creating classical-quantum hybrid algorithms.These algorithms are also called variational to reflect the variational approach to changing the parameters. One of the most important examples of this approach is the quantum approximate optimization algorithm, which is the subject of this notebook. Quantum approximate optimization algorithmThe quantum approximate optimization algorithm (QAOA) is a shallow-circuit variational algorithm for gate-model quantum computers that was inspired by quantum annealing. We discretize the adiabatic pathway in some $p$ steps, where $p$ influences precision. Each discrete time step $i$ has two parameters, $\beta_i, \gamma_i$. The classical variational algorithm does an optimization over these parameters based on the observed energy at the end of a run on the quantum hardware.More formally, we want to discretize the time-dependent $H(t)=(1-t)H_0 + tH_1$ under adiabatic conditions. We achieve this by Trotterizing the unitary. For instance, for time step $t_0$, we can split this unitary as $U(t_0) = U(H_0, \beta_0)U(H_1, \gamma_0)$. We can continue doing this for subsequent time steps, eventually splitting up the evolution into $p$ such chunks:$$U = U(H_0, \beta_0)U(H_1, \gamma_0)\ldots U(H_0, \beta_p)U(H_1, \gamma_p).$$At the end of optimizing the parameters, this discretized evolution will approximate the adiabatic pathway:The Hamiltonian $H_0$ is often referred to as the driving or mixing Hamiltonian, and $H_1$ as the cost Hamiltonian. The simplest mixing Hamiltonian is $H_0 = -\sum_i \sigma^X_i$, the same as the initial Hamiltonian in quantum annealing. By alternating between the two Hamiltonians, the mixing Hamiltonian drives the state towards an equal superposition, whereas the cost Hamiltonian tries to seek its own ground state.Let us import the necessary packages first:
###Code
import numpy as np
from functools import partial
from pyquil import Program, api
from pyquil.paulis import PauliSum, PauliTerm, exponential_map, sZ
from pyquil.gates import *
from scipy.optimize import minimize
from forest_tools import *
np.set_printoptions(precision=3, suppress=True)
qvm_server, quilc_server, fc = init_qvm_and_quilc()
n_qubits = 2
###Output
_____no_output_____
###Markdown
Now we can define our mixing Hamiltonian on some qubits. As in the notebook on classical and quantum many-body physics, we had to define, for instance, an `IZ` operator to express $\mathbb{I}\otimes\sigma_1^Z$, that is, the $\sigma_1^Z$ operator acting only on qubit 1. We can achieve the same effect the following way (this time using the Pauli-X operator). The coefficient here means the strength of the transverse field at the given qubit. This operator will act trivially on all qubits, except the given one. Let's define the mixing Hamiltonian over two qubits:
###Code
Hm = [PauliTerm("X", i, 1.0) for i in range(n_qubits)]
###Output
_____no_output_____
###Markdown
As an example, we will minimize the Ising problem defined by the cost Hamiltonian $H_c=-\sigma^Z_1 \otimes \sigma^Z_2$, whose minimum is reached whenever $\sigma^Z_1 = \sigma^Z_2$ (for the states $|-1, -1\rangle$, $|11\rangle$ or any superposition of both)
###Code
J = np.array([[0,1],[0,0]]) # weight matrix of the Ising model. Only the coefficient (0,1) is non-zero.
Hc = []
for i in range(n_qubits):
for j in range(n_qubits):
Hc.append(PauliTerm("Z", i, -J[i, j]) * PauliTerm("Z", j, 1.0))
###Output
_____no_output_____
###Markdown
During the iterative procedure, we will need to compute $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$. Using the function `exponential_map` of PyQuil, we can build two functions that take respectively $\beta$ and $\gamma$ and return $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$
###Code
exp_Hm = []
exp_Hc = []
for term in Hm:
exp_Hm.append(exponential_map(term))
for term in Hc:
exp_Hc.append(exponential_map(term))
###Output
_____no_output_____
###Markdown
We set the number of time evolution steps $p=1$ and randomly initialize the $\beta_i$ and $\gamma_i$ parameters:
###Code
n_iter = 10 # number of iterations of the optimization procedure
p = 1
β = np.random.uniform(0, np.pi*2, p)
γ = np.random.uniform(0, np.pi*2, p)
###Output
_____no_output_____
###Markdown
The initial state is a uniform superposition of all the basis states $|q_1,...,q_n\rangle$. It can be created by applying a Hadamard gate to every qubit of a new program, each qubit starting in $|0\rangle$.
###Code
initial_state = Program()
for i in range(n_qubits):
initial_state += H(i)
###Output
_____no_output_____
###Markdown
To create the circuit, we need to compose the different unitaries given by the exponential maps defined above, alternating the cost and mixing terms in each of the $p$ steps.
###Code
def create_circuit(β, γ):
circuit = Program()
circuit += initial_state
for i in range(p):
for term_exp_Hc in exp_Hc:
circuit += term_exp_Hc(-β[i])
for term_exp_Hm in exp_Hm:
circuit += term_exp_Hm(-γ[i])
return circuit
###Output
_____no_output_____
###Markdown
We now create a function `evaluate_circuit` that takes a single vector `beta_gamma` (the concatenation of $\beta$ and $\gamma$) and returns $\langle H_c \rangle = \langle \psi | H_c | \psi \rangle$ where $\psi$ is defined by the circuit created with the function above.
###Code
def evaluate_circuit(beta_gamma):
β = beta_gamma[:p]
γ = beta_gamma[p:]
circuit = create_circuit(β, γ)
return qvm.pauli_expectation(circuit, sum(Hc))
###Output
_____no_output_____
###Markdown
Finally, we optimize the angles:
###Code
qvm = api.QVMConnection(endpoint=fc.sync_endpoint, compiler_endpoint=fc.compiler_endpoint)
result = minimize(evaluate_circuit, np.concatenate([β, γ]), method='L-BFGS-B')
result
###Output
_____no_output_____
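###Markdown
For convenience, the optimized angles returned by `scipy.optimize.minimize` can be sliced back into $\beta$ and $\gamma$ (the same slicing is used when building the final circuit below):
###Code
# Unpack the concatenated optimal angles found by the optimizer.
beta_opt, gamma_opt = result['x'][:p], result['x'][p:]
print("optimal beta :", np.round(beta_opt, 3))
print("optimal gamma:", np.round(gamma_opt, 3))
###Output
_____no_output_____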
###Markdown
Analysis of the resultsWe create a circuit using the optimal parameters found.
###Code
circuit = create_circuit(result['x'][:p], result['x'][p:])
###Output
_____no_output_____
###Markdown
We use PyQuil's `WavefunctionSimulator` in order to display the state created by the circuit.
###Code
wf_sim = api.WavefunctionSimulator(connection=fc)
state = wf_sim.wavefunction(circuit)
print(state)
###Output
_____no_output_____
###Markdown
We see that the state is approximately $(0.5 - 0.5i) \left( |00 \rangle + |11 \rangle \right) = e^{i \theta} \frac{1}{\sqrt{2}} \left( |00 \rangle + |11 \rangle \right)$, where $\theta$ is a global phase factor that doesn't change the probabilities. It corresponds to a uniform superposition of the two solutions of the classical problem: $(\sigma_1=1$, $\sigma_2=1)$ and $(\sigma_1=-1$, $\sigma_2=-1)$. Let's now try to evaluate the operators $\sigma^Z_1$ and $\sigma^Z_2$ independently:
###Code
print(qvm.pauli_expectation(circuit, PauliSum([sZ(0)])))
print(qvm.pauli_expectation(circuit, PauliSum([sZ(1)])))
###Output
_____no_output_____
###Markdown
Current and near-term quantum computers suffer from imperfections, as we have repeatedly pointed out. This is why we cannot run long algorithms, that is, deep circuits on them. A new breed of algorithms started to appear around 2013, focusing on getting an advantage from imperfect quantum computers. The basic idea is extremely simple: run a short sequence of gates where some gates are parametrized. Then read out the result, make adjustments to the parameters on a classical computer, and repeat the calculation with the new parameters on the quantum hardware. This way we create an iterative loop between the quantum and the classical processing units, creating classical-quantum hybrid algorithms. These algorithms are also called variational to reflect the variational approach to changing the parameters. One of the most important examples of this approach is the quantum approximate optimization algorithm, which is the subject of this notebook. Quantum approximate optimization algorithm The quantum approximate optimization algorithm (QAOA) is a shallow-circuit variational algorithm for gate-model quantum computers that was inspired by quantum annealing. We discretize the adiabatic pathway in some $p$ steps, where $p$ influences precision. Each discrete time step $i$ has two parameters, $\beta_i, \gamma_i$. The classical variational algorithm does an optimization over these parameters based on the observed energy at the end of a run on the quantum hardware. More formally, we want to discretize the time-dependent Hamiltonian $H(t)=(1-t)H_0 + tH_1$ under adiabatic conditions. We achieve this by Trotterizing the unitary. For instance, for time step $t_0$, we can split this unitary as $U(t_0) = U(H_0, \beta_0)U(H_1, \gamma_0)$. We can continue doing this for subsequent time steps, eventually splitting up the evolution to $p$ such chunks:$$U = U(H_0, \beta_0)U(H_1, \gamma_0)\ldots U(H_0, \beta_p)U(H_1, \gamma_p).$$At the end of optimizing the parameters, this discretized evolution will approximate the adiabatic pathway. The Hamiltonian $H_0$ is often referred to as the driving or mixing Hamiltonian, and $H_1$ as the cost Hamiltonian. The simplest mixing Hamiltonian is $H_0 = -\sum_i \sigma^X_i$, the same as the initial Hamiltonian in quantum annealing. By alternating between the two Hamiltonians, the mixing Hamiltonian drives the state towards an equal superposition, whereas the cost Hamiltonian tries to seek its own ground state. Let us import the necessary packages first:
###Code
import numpy as np
from functools import partial
from pyquil import Program, api
from pyquil.paulis import PauliSum, PauliTerm, exponential_map, sZ
from pyquil.gates import *
from scipy.optimize import minimize
from forest_tools import *
np.set_printoptions(precision=3, suppress=True)
qvm_server, quilc_server, fc = init_qvm_and_quilc()
n_qubits = 2
###Output
_____no_output_____
###Markdown
Now we can define our mixing Hamiltonian on some qubits. As in the notebook on classical and quantum many-body physics, we had to define, for instance, an `IZ` operator to express $\mathbb{I}\otimes\sigma_1^Z$, that is, the $\sigma_1^Z$ operator acting only on qubit 1. We can achieve the same effect the following way (this time using the Pauli-X operator). The coefficient here means the strength of the transverse field at the given qubit. This operator will act trivially on all qubits, except the given one. Let's define the mixing Hamiltonian over two qubits:
###Code
Hm = [PauliTerm("X", i, -1.0) for i in range(n_qubits)]
###Output
_____no_output_____
###Markdown
As an example, we will minimize the Ising problem defined by the cost Hamiltonian $H_c=-\sigma^Z_1 \otimes \sigma^Z_2$, whose minimum is reached whenever $\sigma^Z_1 = \sigma^Z_2$: that is, for the states $|00\rangle$ and $|11\rangle$ (both spins $+1$ or both spins $-1$), or any superposition of the two.
###Code
J = np.array([[0,1],[0,0]]) # weight matrix of the Ising model. Only the coefficient (0,1) is non-zero.
Hc = []
for i in range(n_qubits):
for j in range(n_qubits):
Hc.append(PauliTerm("Z", i, -J[i, j]) * PauliTerm("Z", j, 1.0))
###Output
_____no_output_____
###Markdown
During the iterative procedure, we will need to compute $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$. Using the function `exponential_map` of PyQuil, we can build two functions that take respectively $\beta$ and $\gamma$ and return $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$
###Code
exp_Hm = []
exp_Hc = []
for term in Hm:
exp_Hm.append(exponential_map(term))
for term in Hc:
exp_Hc.append(exponential_map(term))
###Output
_____no_output_____
###Markdown
We set the number of time evolution steps $p=1$ and randomly initialize the $\beta_i$ and $\gamma_i$ parameters:
###Code
p = 1
β = np.random.uniform(0, np.pi*2, p)
γ = np.random.uniform(0, np.pi*2, p)
###Output
_____no_output_____
###Markdown
The initial state is a uniform superposition of all the basis states $|q_1,...,q_n\rangle$. It can be created by applying a Hadamard gate to every qubit of a new program, each qubit starting in $|0\rangle$.
###Code
initial_state = Program()
for i in range(n_qubits):
initial_state += H(i)
###Output
_____no_output_____
###Markdown
To create the circuit, we need to compose the different unitaries given by the exponential maps defined above, alternating the cost and mixing terms in each of the $p$ steps.
###Code
def create_circuit(β, γ):
circuit = Program()
circuit += initial_state
for i in range(p):
for term_exp_Hc in exp_Hc:
circuit += term_exp_Hc(-β[i])
for term_exp_Hm in exp_Hm:
circuit += term_exp_Hm(-γ[i])
return circuit
###Output
_____no_output_____
###Markdown
We now create a function `evaluate_circuit` that takes a single vector `beta_gamma` (the concatenation of $\beta$ and $\gamma$) and returns $\langle H_c \rangle = \langle \psi | H_c | \psi \rangle$ where $\psi$ is defined by the circuit created with the function above.
###Code
def evaluate_circuit(beta_gamma):
β = beta_gamma[:p]
γ = beta_gamma[p:]
circuit = create_circuit(β, γ)
return qvm.pauli_expectation(circuit, sum(Hc))
###Output
_____no_output_____
###Markdown
Finally, we optimize the angles:
###Code
qvm = api.QVMConnection(endpoint=fc.sync_endpoint, compiler_endpoint=fc.compiler_endpoint)
result = minimize(evaluate_circuit, np.concatenate([β, γ]), method='L-BFGS-B')
result
###Output
_____no_output_____
###Markdown
Analysis of the resultsWe create a circuit using the optimal parameters found.
###Code
circuit = create_circuit(result['x'][:p], result['x'][p:])
###Output
_____no_output_____
###Markdown
We use PyQuil's `WavefunctionSimulator` in order to display the state created by the circuit.
###Code
wf_sim = api.WavefunctionSimulator(connection=fc)
state = wf_sim.wavefunction(circuit)
print(state)
###Output
_____no_output_____
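###Markdown
As an optional cross-check in plain NumPy (a hedged sketch; the amplitude ordering is assumed to be $|00\rangle, |01\rangle, |10\rangle, |11\rangle$), the single-qubit $\langle\sigma^Z\rangle$ values of a state close to the one printed above should both be near zero, which is what the expectation values evaluated below will also show.
###Code
# Illustrative cross-check: single-qubit <Z> expectations from a 2-qubit statevector.
amps = np.array([0.5 + 0.5j, 0.0, 0.0, 0.5 + 0.5j])   # roughly the state printed above
probs = np.abs(amps) ** 2
prob_matrix = probs.reshape(2, 2)                     # rows: first qubit, columns: second qubit
z_eigenvalues = np.array([1.0, -1.0])                 # Z eigenvalues of |0> and |1>
print("<Z> first qubit :", np.sum(prob_matrix.sum(axis=1) * z_eigenvalues))
print("<Z> second qubit:", np.sum(prob_matrix.sum(axis=0) * z_eigenvalues))
###Output
_____no_output_____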
###Markdown
We see that the state is approximately $(0.5 + 0.5i) \left( |00 \rangle + |11 \rangle \right) = e^{i \theta} \frac{1}{\sqrt{2}} \left( |00 \rangle + |11 \rangle \right)$, where $\theta$ is a global phase factor that doesn't change the probabilities. It corresponds to a uniform superposition of the two solutions of the classical problem: $(\sigma_1=1$, $\sigma_2=1)$ and $(\sigma_1=-1$, $\sigma_2=-1)$. Let's now try to evaluate the operators $\sigma^Z_1$ and $\sigma^Z_2$ independently:
###Code
print(qvm.pauli_expectation(circuit, PauliSum([sZ(0)])))
print(qvm.pauli_expectation(circuit, PauliSum([sZ(1)])))
###Output
_____no_output_____ |
tutorials/single_table_data/02_CTGAN_Model.ipynb | ###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
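###Markdown
Before reading the description that follows, a quick optional check of the column types and the missing values can be useful (a hedged pandas sketch, assuming as in the SDV demos that `data` is a pandas DataFrame):
###Code
# Inspect column dtypes and per-column missing values of the demo table.
print(data.dtypes)
print(data.isnull().sum())
###Output
_____no_output_____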
###Markdown
As you can see, this table contains information about students which includes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placement. You will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed. Let us use `CTGAN` to learn this data and then sample synthetic data about new students to see how well the model captures the characteristics indicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate.
###Code
new_data = model.sample(200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
###Markdown
**Note**You can control the number of rows by specifying the number of `samples`in the `model.sample()`. To test, try `model.sample(10000)`.Note that the original table only had \~200 rows. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
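###Markdown
As a small, hedged illustration of the point made in the note below, we can compare the size of the saved file with the in-memory size of the training data using only the standard library and pandas:
###Code
# Compare the serialized model size with the in-memory size of the training data.
import os
print("model file size (bytes):", os.path.getsize('my_model.pkl'))
print("training data size (bytes):", int(data.memory_usage(deep=True).sum()))
###Output
_____no_output_____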
###Markdown
This will have created a file called `my_model.pkl` in the same directory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is much smaller than the size of the data that you used to generate it. This is because the serialized model contains **no information about the original data**, other than the parameters it needs to generate synthetic versions of it. This means that you can safely share this `my_model.pkl` file without the risk of disclosing any of your real data! Load the model and generate new data The file you just generated can be sent over to the system where the synthetic data will be generated. Once it is there, you can load it using the `CTGAN.load` method, and then you are ready to sample new data from the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have `sdv` and `ctgan` installed, otherwise it will not be able to load the model and use it. Specifying the Primary Key of the table One of the first things that you may have noticed when looking at the demo data is that there is a `student_id` column which acts as the primary key of the table, and which is supposed to have unique values. Indeed, if we look at the number of times that each value appears, we see that all of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument `anonymize_fields` to our model when we create the instance. This `anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it. The complete list of possible categories can be seen in the [Faker Providers](https://faker.readthedocs.io/en/master/providers.html) page, and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass a dictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few more advanced usage examples and see the different arguments that we can pass to our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters? Apart from the common Tabular Model arguments, `CTGAN` has a number of additional hyperparameters that control its learning behavior and can impact the performance of the model, both in terms of quality of the generated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Their default values are `300` and `500` respectively, and `batch_size` needs to always be a multiple of `10`. These hyperparameters have a very direct effect on how long the training process lasts, but also on the quality of the generated data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument must always be a multiple of `10`! As an example, we will try to fit the `CTGAN` model slightly increasing the number of epochs, reducing the `batch_size`, adding one additional layer to the models involved and using a smaller weight decay. Before we start, we will evaluate the quality of the previously generated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
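###Markdown
As the hyperparameter list above suggests, for a new dataset it can help to start with a quick, low-epoch trial fit just to gauge training time before committing to a longer run. A hedged sketch using only parameters named in that list:
###Code
# A cheap trial fit to gauge training time before the longer run below.
trial_model = CTGAN(
    primary_key='student_id',
    epochs=10,        # far below the default of 300
    batch_size=100,   # must be a multiple of 10
    verbose=True      # print fit progress to stdout
)
trial_model.fit(data)
###Output
_____no_output_____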
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate. The number of rows (``num_rows``)is a required parameter.
###Code
new_data = model.sample(num_rows=200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
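###Markdown
The note below mentions a few optional arguments of `sample`; here is a hedged sketch of how they might be combined (argument names are taken from that note, and their exact availability depends on your SDV version):
###Code
# Optional sampling arguments described in the note below (version-dependent).
synthetic = model.sample(
    num_rows=500,
    batch_size=100,                    # sample in smaller batches and track progress
    output_file_path='synthetic.csv',  # also write the generated rows to a CSV file
    randomize_samples=True             # set to False for repeatable output
)
synthetic.head()
###Output
_____no_output_____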
###Markdown
**Note**There are a number of other parameters in this method that you can use tooptimize the process of generating synthetic data. Use ``output_file_path``to directly write results to a CSV file, ``batch_size`` to break up samplinginto smaller pieces & track their progress and ``randomize_samples`` todetermine whether to generate the same synthetic data every time.See the API Section for more details. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risc of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(num_rows=200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The list complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?A part from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Its default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is multiple of `10`. These hyperparameters have a very direct effect in time the training process lasts but also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Resiudal Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller wright decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
As we can see, in this case these modifications changed the obtained results slightly, but they did not introduce dramatic changes in the performance either. Conditional Sampling As the name implies, conditional sampling allows us to sample from a conditional distribution using the `CTGAN` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `sample_conditions` method as a list of `sdv.sampling.Condition` objects or to the `sample_remaining_columns` method as a dataframe. When specifying a `sdv.sampling.Condition` object, we can pass in the desired conditions as a dictionary, as well as specify the number of desired rows for that condition.
###Code
from sdv.sampling import Condition
condition = Condition({
'gender': 'M'
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
###Markdown
It's also possible to condition on multiple columns, such as `gender = M, 'experience_years': 0`.
###Code
condition = Condition({
'gender': 'M',
'experience_years': 0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
###Markdown
In the `sample_remaining_columns` method, `conditions` is passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, all of them with `work_experience = True`, we can do the following:
###Code
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
'work_experience': [True, True, True, True, True, True]
})
model.sample_remaining_columns(conditions)
###Output
_____no_output_____
###Markdown
`CTGAN` also supports conditioning on continuous values, as long as the values are within the range of seen numbers. For example, if all the values of the dataset are within 0 and 1, `CTGAN` will not be able to set this value to 1000.
###Code
condition = Condition({
'degree_perc': 70.0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate.
###Code
new_data = model.sample(200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
###Markdown
**Note**You can control the number of rows by specifying the number of `samples`in the `model.sample()`. To test, try `model.sample(10000)`.Note that the original table only had \~200 rows. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risc of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The list complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?A part from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Its default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is multiple of `10`. These hyperparameters have a very direct effect in time the training process lasts but also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Resiudal Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller wright decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
As we can see, in this case these modifications changed the obtained results slightly, but they did not introduce dramatic changes in the performance either. Conditional Sampling As the name implies, conditional sampling allows us to sample from a conditional distribution using the `CTGAN` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `conditions` parameter in the `sample` method either as a dataframe or a dictionary. In case a dictionary is passed, the model will generate as many rows as requested, all of which will satisfy the specified conditions, such as `gender = M`.
###Code
conditions = {
'gender': 'M'
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
It's also possible to condition on multiple columns, such as `gender = M, 'experience_years': 0`.
###Code
conditions = {
'gender': 'M',
'experience_years': 0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
`conditions` can also be passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, all of them with `work_experience = True`, we can do the following:
###Code
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
'work_experience': [True, True, True, True, True, True]
})
model.sample(conditions=conditions)
###Output
_____no_output_____
###Markdown
`CTGAN` also supports conditioning on continuous values, as long as the values are within the range of seen numbers. For example, if all the values of the dataset are within 0 and 1, `CTGAN` will not be able to set this value to 1000.
###Code
conditions = {
'degree_perc': 70.0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
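###Markdown
The note below explains that each field is transformed automatically during fitting. As a hedged sketch based on the SDV documentation (the transformer name used here is an assumption and may differ between versions), you could override the transformer chosen for a specific column through the `field_transformers` argument when creating the model:
###Code
# Minimal sketch: force a specific Reversible Data Transform for one column.
# 'label_encoding' is an assumed transformer name taken from the SDV docs.
custom_model = CTGAN(
    field_transformers={
        'gender': 'label_encoding',
    }
)
# custom_model.fit(data)  # would fit exactly as before, using the override
###Output
_____no_output_____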
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate. The number of rows (``num_rows``)is a required parameter.
###Code
new_data = model.sample(num_rows=200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
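###Markdown
The note below describes a few optional arguments of `sample`. As a brief, hedged sketch (argument names follow the SDV documentation and may vary across versions), they could be combined like this:
###Code
# Sketch only: write the sampled rows to a CSV file, sample in smaller
# batches to track progress, and keep sampling non-deterministic.
new_data = model.sample(
    num_rows=200,
    batch_size=50,
    output_file_path='synthetic_students.csv',
    randomize_samples=True,
)
###Output
_____no_output_____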
###Markdown
**Note**There are a number of other parameters in this method that you can use tooptimize the process of generating synthetic data. Use ``output_file_path``to directly write results to a CSV file, ``batch_size`` to break up samplinginto smaller pieces & track their progress and ``randomize_samples`` todetermine whether to generate the same synthetic data every time.See the API Section for more details. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risk of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(num_rows=200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
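###Markdown
Before moving on, note that the same mechanism extends to several PII fields at once. The sketch below is purely illustrative: the `email` and `name` columns do not exist in this demo table, and the categories are taken from the Faker providers list mentioned above.
###Code
# Hypothetical configuration for a table that also contained PII columns
# named 'email' and 'name' (not present in student_placements_pii).
pii_model = CTGAN(
    primary_key='student_id',
    anonymize_fields={
        'address': 'address',
        'email': 'email',
        'name': 'name',
    }
)
# pii_model.fit(my_table_with_more_pii)  # would replace all three fields with fake values
###Output
_____no_output_____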
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?Apart from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Their default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is a multiple of `10`. These hyperparameters have a very direct effect on the time the training process lasts and also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller weight decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
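###Markdown
For a more detailed picture, `evaluate` can reportedly return the individual metric scores instead of a single aggregated value (a hedged sketch; the `aggregate` argument follows the SDV evaluation docs and may differ between versions):
###Code
# Sketch: per-metric breakdown instead of one aggregated score.
evaluate(new_data, data, aggregate=False)
###Output
_____no_output_____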
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
As we can see, in this case these modifications changed the obtainedresults slightly, but they did not introduce dramatic changes in theperformance either. Conditional SamplingAs the name implies, conditional sampling allows us to sample from a conditional distribution using the `CTGAN` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `sample_conditions` method as a list of `sdv.sampling.Condition` objects or to the `sample_remaining_columns` method as a dataframe. When specifying an `sdv.sampling.Condition` object, we can pass in the desired conditions as a dictionary, as well as specify the number of desired rows for that condition.
###Code
from sdv.sampling import Condition
condition = Condition({
'gender': 'M'
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
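###Markdown
Several `Condition` objects can be combined in a single call; as a small sketch (the row counts below are chosen arbitrarily), the model would then return the rows for each condition in order:
###Code
male_rows = Condition({'gender': 'M'}, num_rows=3)
female_rows = Condition({'gender': 'F'}, num_rows=3)
model.sample_conditions(conditions=[male_rows, female_rows])
###Output
_____no_output_____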
###Markdown
It's also possible to condition on multiple columns, such as `gender = M` and `experience_years = 0`.
###Code
condition = Condition({
'gender': 'M',
'experience_years': 0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
###Markdown
In the `sample_remaining_columns` method, `conditions` is passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, all of them with `work_experience = True`, we can do the following:
###Code
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
'work_experience': [True, True, True, True, True, True]
})
model.sample_remaining_columns(conditions)
###Output
_____no_output_____
###Markdown
`CTGAN` also supports conditioning on continuous values, as long as the values are within the range of seen numbers. For example, if all the values of the dataset are within 0 and 1, `CTGAN` will not be able to set this value to 1000.
###Code
condition = Condition({
'degree_perc': 70.0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate.
###Code
new_data = model.sample(200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
###Markdown
**Note**You can control the number of rows by specifying the number of `samples`in the `model.sample()`. To test, try `model.sample(10000)`.Note that the original table only had \~200 rows. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risk of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?Apart from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Their default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is a multiple of `10`. These hyperparameters have a very direct effect on the time the training process lasts and also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller weight decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
As we can see, in this case these modifications changed the obtainedresults slightly, but they did not introduce dramatic changes in theperformance either. Conditional SamplingAs the name implies, conditional sampling allows us to sample from a conditional distribution using the `CTGAN` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `conditions` parameter in the `sample` method either as a dataframe or a dictionary.In case a dictionary is passed, the model will generate as many rows as requested, all of which will satisfy the specified conditions, such as `gender = M`.
###Code
conditions = {
'gender': 'M'
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
It's also possible to condition on multiple columns, such as `gender = M` and `experience_years = 0`.
###Code
conditions = {
'gender': 'M',
'experience_years': 0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
`conditions` can also be passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, all of them with `work_experience = True`, we can do the following:
###Code
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
'work_experience': [True, True, True, True, True, True]
})
model.sample(conditions=conditions)
###Output
_____no_output_____
###Markdown
`CTGAN` also supports conditioning on continuous values, as long as the values are within the range of seen numbers. For example, if all the values of the dataset are within 0 and 1, `CTGAN` will not be able to set this value to 1000.
###Code
conditions = {
'degree_perc': 70.0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate.
###Code
new_data = model.sample(200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
###Markdown
**Note**You can control the number of rows by specifying the number of `samples`in the `model.sample()`. To test, try `model.sample(10000)`.Note that the original table only had \~200 rows. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risk of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?Apart from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Their default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is a multiple of `10`. These hyperparameters have a very direct effect on the time the training process lasts and also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller weight decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
CTGAN Model===========In this guide we will go through a series of steps that will let youdiscover functionalities of the `CTGAN` model, including how to:- Create an instance of `CTGAN`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `CTGAN` to anonymize PII information.- Customize the data transformations to improve the learning process.- Specify hyperparameters to improve the output quality.What is CTGAN?--------------The `sdv.tabular.CTGAN` model is based on the GAN-based Deep Learningdata synthesizer which was presented at the NeurIPS 2020 conference bythe paper titled [Modeling Tabular data using ConditionalGAN](https://arxiv.org/abs/1907.00503).Let\'s now discover how to learn a dataset and later on generatesynthetic data with the same format and statistical properties by usingthe `CTGAN` class from SDV.Quick Usage-----------We will start by loading one of our demo datasets, the`student_placements`, which contains information about MBA students thatapplied for placements during the year 2020.**Warning**In order to follow this guide you need to have `ctgan` installed on yoursystem. If you have not done it yet, please install `ctgan` now byexecuting the command `pip install sdv` in a terminal.
###Code
from sdv.demo import load_tabular_demo
data = load_tabular_demo('student_placements')
data.head()
###Output
_____no_output_____
###Markdown
As you can see, this table contains information about students whichincludes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placementYou will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed.Let us use `CTGAN` to learn this data and then sample synthetic dataabout new students to see how well the model captures the characteristicsindicated above. In order to do this you will need to:- Import the `sdv.tabular.CTGAN` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
###Code
from sdv.tabular import CTGAN
model = CTGAN()
model.fit(data)
###Output
_____no_output_____
###Markdown
**Note**Notice that the model `fitting` process took care of transforming thedifferent fields using the appropriate [Reversible DataTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has aformat that the underlying CTGANSynthesizer class can handle. Generate synthetic data from the modelOnce the modeling has finished you are ready to generate new syntheticdata by calling the `sample` method from your model passing the numberof rows that we want to generate.
###Code
new_data = model.sample(200)
###Output
_____no_output_____
###Markdown
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
###Code
new_data.head()
###Output
_____no_output_____
###Markdown
**Note**You can control the number of rows by specifying the number of `samples`in the `model.sample()`. To test, try `model.sample(10000)`.Note that the original table only had \~200 rows. Save and Load the modelIn many scenarios it will be convenient to generate synthetic versionsof your data directly in systems that do not have access to the originaldata source. For example, if you may want to generate testing data onthe fly inside a testing environment that does not have access to yourproduction database. In these scenarios, fitting the model with realdata every time that you need to generate new data is feasible, so youwill need to fit a model in your production environment, save the fittedmodel into a file, send this file to the testing environment and thenload it there to be able to `sample` from it.Let\'s see how this process works. Save and share the modelOnce you have fitted the model, all you need to do is call its `save`method passing the name of the file in which you want to save the model.Note that the extension of the filename is not relevant, but we will beusing the `.pkl` extension to highlight that the serialization protocolused is [pickle](https://docs.python.org/3/library/pickle.html).
###Code
model.save('my_model.pkl')
###Output
_____no_output_____
###Markdown
This will have created a file called `my_model.pkl` in the samedirectory in which you are running SDV.**Important**If you inspect the generated file you will notice that its size is muchsmaller than the size of the data that you used to generate it. This isbecause the serialized model contains **no information about theoriginal data**, other than the parameters it needs to generatesynthetic versions of it. This means that you can safely share this`my_model.pkl` file without the risk of disclosing any of your realdata! Load the model and generate new dataThe file you just generated can be sent over to the system where thesynthetic data will be generated. Once it is there, you can load itusing the `CTGAN.load` method, and then you are ready to sample new datafrom the loaded instance:
###Code
loaded = CTGAN.load('my_model.pkl')
new_data = loaded.sample(200)
###Output
_____no_output_____
###Markdown
**Warning**Notice that the system where the model is loaded needs to also have`sdv` and `ctgan` installed, otherwise it will not be able to load themodel and use it. Specifying the Primary Key of the tableOne of the first things that you may have noticed when looking at the demodata is that there is a `student_id` column which acts as the primarykey of the table, and which is supposed to have unique values. Indeed,if we look at the number of times that each value appears, we see thatall of them appear at most once:
###Code
data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
###Code
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
###Output
_____no_output_____
###Markdown
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
###Code
model = CTGAN(
primary_key='student_id'
)
model.fit(data)
new_data = model.sample(200)
new_data.head()
###Output
_____no_output_____
###Markdown
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
###Code
new_data.student_id.value_counts().max()
###Output
_____no_output_____
###Markdown
Anonymizing Personally Identifiable Information (PII)There will be many cases where the data will contain PersonallyIdentifiable Information which we cannot disclose. In these cases, wewill want our Tabular Models to replace the information within thesefields with fake, simulated data that looks similar to the real one butdoes not contain any of the original values.Let\'s load a new dataset that contains a PII field, the`student_placements_pii` demo, and try to generate synthetic versions ofit that do not contain any of the PII fields.**Note**The `student_placements_pii` dataset is a modified version of the`student_placements` dataset with one new field, `address`, whichcontains PII information about the students. Notice that this additional`address` field has been simulated and does not correspond to data fromthe real users.
###Code
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
###Output
_____no_output_____
###Markdown
If we use our tabular model on this new data we will see how thesynthetic data that it generates discloses the addresses from the realstudents:
###Code
model = CTGAN(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
More specifically, we can see how all the addresses that have beengenerated actually come from the original dataset:
###Code
new_data_pii.address.isin(data_pii.address).sum()
###Output
_____no_output_____
###Markdown
In order to solve this, we can pass an additional argument`anonymize_fields` to our model when we create the instance. This`anonymize_fields` argument will need to be a dictionary that contains:- The name of the field that we want to anonymize.- The category of the field that we want to use when we generate fake values for it.The complete list of possible categories can be seen in the [FakerProviders](https://faker.readthedocs.io/en/master/providers.html) page,and it contains a huge list of concepts such as:- name- address- country- city- ssn- credit_card_number- credit_card_expire- credit_card_security_code- email- telephone- \...In this case, since the field is an address, we will pass adictionary indicating the category `address`
###Code
model = CTGAN(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
###Output
_____no_output_____
###Markdown
As a result, we can see how the real `address` values have been replacedby other fake addresses:
###Code
new_data_pii = model.sample(200)
new_data_pii.head()
###Output
_____no_output_____
###Markdown
Which means that none of the original addresses can be found in thesampled data:
###Code
data_pii.address.isin(new_data_pii.address).sum()
###Output
_____no_output_____
###Markdown
Advanced Usage--------------Now that we have discovered the basics, let\'s go over a few moreadvanced usage examples and see the different arguments that we can passto our `CTGAN` Model in order to customize it to our needs. How to modify the CTGAN Hyperparameters?Apart from the common Tabular Model arguments, `CTGAN` has a number ofadditional hyperparameters that control its learning behavior and canimpact on the performance of the model, both in terms of quality of thegenerated data and computational time.- `epochs` and `batch_size`: these arguments control the number of iterations that the model will perform to optimize its parameters, as well as the number of samples used in each step. Their default values are `300` and `500` respectively, and `batch_size` needs to always be a value which is a multiple of `10`. These hyperparameters have a very direct effect on the time the training process lasts and also on the performance of the data, so for new datasets, you might want to start by setting a low value on both of them to see how long the training process takes on your data and later on increase the number to acceptable values in order to improve the performance.- `log_frequency`: Whether to use log frequency of categorical levels in conditional sampling. It defaults to `True`. This argument affects how the model processes the frequencies of the categorical values that are used to condition the rest of the values. In some cases, changing it to `False` could lead to better performance.- `embedding_dim` (int): Size of the random sample passed to the Generator. Defaults to 128.- `generator_dim` (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer will be created for each one of the values provided. Defaults to (256, 256).- `discriminator_dim` (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer will be created for each one of the values provided. Defaults to (256, 256).- `generator_lr` (float): Learning rate for the generator. Defaults to 2e-4.- `generator_decay` (float): Generator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_lr` (float): Learning rate for the discriminator. Defaults to 2e-4.- `discriminator_decay` (float): Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.- `discriminator_steps` (int): Number of discriminator updates to do for each generator update. From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper default is 5. Default used is 1 to match original CTGAN implementation.- `verbose`: Whether to print fit progress on stdout. Defaults to `False`.**Warning**Notice that the value that you set on the `batch_size` argument mustalways be a multiple of `10`!As an example, we will try to fit the `CTGAN` model slightly increasingthe number of epochs, reducing the `batch_size`, adding one additionallayer to the models involved and using a smaller weight decay.Before we start, we will evaluate the quality of the previouslygenerated data using the `sdv.evaluation.evaluate` function
###Code
from sdv.evaluation import evaluate
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
Afterwards, we create a new instance of the `CTGAN` model with thehyperparameter values that we want to use
###Code
model = CTGAN(
primary_key='student_id',
epochs=500,
batch_size=100,
generator_dim=(256, 256, 256),
discriminator_dim=(256, 256, 256)
)
###Output
_____no_output_____
###Markdown
And fit to our data.
###Code
model.fit(data)
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate new data and evaluate the results.
###Code
new_data = model.sample(len(data))
evaluate(new_data, data)
###Output
_____no_output_____
###Markdown
As we can see, in this case these modifications changed the obtainedresults slightly, but they did not introduce dramatic changes in theperformance either. Conditional SamplingAs the name implies, conditional sampling allows us to sample from a conditional distribution using the `CTGAN` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `conditions` parameter in the `sample` method either as a dataframe or a dictionary.In case a dictionary is passed, the model will generate as many rows as requested, all of which will satisfy the specified conditions, such as `gender = M`.
###Code
conditions = {
'gender': 'M'
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
It's also possible to condition on multiple columns, such as `gender = M, 'experience_years': 0`.
###Code
conditions = {
'gender': 'M',
'experience_years': 0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____
###Markdown
`conditions` can also be passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, all of them with `work_experience = True`, we can do the following:
###Code
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
'work_experience': [True, True, True, True, True, True]
})
model.sample(conditions=conditions)
###Output
_____no_output_____
###Markdown
`CTGAN` also supports conditioning on continuous values, as long as the conditioned values are within the range of values seen during training. For example, if all the values of the dataset are between 0 and 1, `CTGAN` will not be able to set this value to 1000.
###Code
conditions = {
'degree_perc': 70.0
}
model.sample(5, conditions=conditions)
###Output
_____no_output_____ |
fundamentals/src/notebooks/030_remote_execution.ipynb | ###Markdown
Remote execution on compute cluster
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
target = ws.compute_targets["cpu-cluster"]
from azureml.core import ScriptRunConfig
script = ScriptRunConfig(
source_directory="030_scripts",
script="sklearn_vanilla_train.py",
compute_target=target,
environment=ws.environments["AzureML-sklearn-0.24-ubuntu18.04-py37-cpu"],
arguments=["--alpha", 0.01],
)
from azureml.core import Experiment
exp = Experiment(ws, "remote-script-execution")
run = exp.submit(script)
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
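###Markdown
The submitted script itself lives in the `030_scripts` folder and is not shown in this notebook. The cell below is only a guessed sketch of what `sklearn_vanilla_train.py` could look like, assuming it parses the `--alpha` argument, trains a simple scikit-learn model on the diabetes data, and logs the `training_rmse` metric that the hyperparameter tuning section later optimizes.
###Code
# Hypothetical sketch of 030_scripts/sklearn_vanilla_train.py (not the actual file)
import argparse
from azureml.core import Run
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

parser = argparse.ArgumentParser()
parser.add_argument("--alpha", type=float, default=1.0)
args = parser.parse_args()

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = Ridge(alpha=args.alpha).fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test), squared=False)

# Log the metric under the name used by the HyperDrive configuration
run = Run.get_context()
run.log("training_rmse", rmse)
###Output
_____no_output_____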
###Markdown
Custom environment
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
import sklearn
diabetes_env = Environment(name="diabetes-training-env")
diabetes_env.python.conda_dependencies = CondaDependencies.create(
conda_packages=[
f"scikit-learn=={sklearn.__version__}",
"mlflow",
],
pip_packages=["azureml-defaults", "azureml-mlflow", "azureml-dataprep[pandas]"],
)
# Or if you had a yml conda file
# diabetes_env = Environment.from_conda_specification(
# name = "diabetes-training-env",
# file_path = "diabetes-conda.yml")
# Or even from Docker file
# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment.environment?view=azure-ml-py#from-dockerfile-name--dockerfile--conda-specification-none--pip-requirements-none-
diabetes_env.environment_variables["MY_VAR"] = "Hello from environment"
script = ScriptRunConfig(
source_directory="030_scripts",
script="sklearn_vanilla_train.py",
compute_target=target,
environment=diabetes_env,
arguments=["--alpha", 0.01],
)
exp = Experiment(ws, "remote-script-execution")
run = exp.submit(script)
# First time you will see 20_image_build_log.txt.
# The image will be stored in the container registry and will
# be reused in follow up calls.
run.wait_for_completion(show_output=True)
# Optionally, register the environment
diabetes_env.register(ws)
###Output
_____no_output_____
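###Markdown
Once registered, the environment can be retrieved by name in a later session instead of being redefined — a small sketch:
###Code
# Fetch the environment registered above by name (sketch)
from azureml.core import Environment

restored_env = Environment.get(workspace=ws, name="diabetes-training-env")
print(restored_env.name)
###Output
_____no_output_____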
###Markdown
Consuming datasets
###Code
from azureml.core import Dataset
dataset = Dataset.get_by_name(ws, name="diabetes-tabular")
from azureml.core import ScriptRunConfig
script = ScriptRunConfig(
source_directory="030_scripts",
script="train_with_azureml_workspace.py",
compute_target=target,
environment=diabetes_env,
arguments=["--alpha", 0.01, dataset.as_named_input("diabetes_dataset")],
)
from azureml.core import Experiment
exp = Experiment(ws, "remote-script-execution")
run = exp.submit(script)
# You shouldn't see the 20_image_build_log.txt this time
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
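###Markdown
Inside `train_with_azureml_workspace.py` (not shown here) the named input can be read back from the run context. The cell below is a sketch of one common pattern, assuming the dataset is consumed as a tabular dataset; it is not the actual script.
###Code
# Sketch of how the submitted script could access the named input dataset
from azureml.core import Run

run = Run.get_context()
diabetes_ds = run.input_datasets["diabetes_dataset"]  # name passed to as_named_input()
df = diabetes_ds.to_pandas_dataframe()                # needs azureml-dataprep[pandas]
print(df.shape)
###Output
_____no_output_____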
###Markdown
Hyper parameter tuning
###Code
# Note that we don't pass arguments
script = ScriptRunConfig(
source_directory="030_scripts",
script="sklearn_vanilla_train.py",
compute_target=target,
environment=diabetes_env,
)
from azureml.train.hyperdrive import HyperDriveConfig
from azureml.train.hyperdrive import RandomParameterSampling, uniform, PrimaryMetricGoal
param_sampling = RandomParameterSampling(
{
"alpha": uniform(0.00001, 0.1),
}
)
hd_config = HyperDriveConfig(
run_config=script,
hyperparameter_sampling=param_sampling,
primary_metric_name="training_rmse",
primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
max_total_runs=20,
max_concurrent_runs=2,
)
experiment = Experiment(ws, "hyperdrive-experiment")
hyperdrive_run = experiment.submit(hd_config)
hyperdrive_run.wait_for_completion(show_output=True)
###Output
_____no_output_____ |
junpu_cnn.ipynb | ###Markdown
Written test (1): Use TensorFlow to build a CNN that predicts the labels of the fashion-mnist dataset
###Code
from utils import mnist_reader
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
print(tf.__version__)
# Read the data using the helper function included in the GitHub project
X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('data/fashion', kind='t10k')
###Output
1.4.0
###Markdown
The fashion-mnist dataset has 10 classes; display some images belonging to the 10 labels
###Code
# Create a dict for the 10 label classes
label_dict = {
0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot'
}
# Display one product image for each of the 10 classes
fig, axs = plt.subplots(2, 5)
images = []
for n in range(10):
pic_index = np.argwhere(y_train == n)[0]
images.append(axs[n//5,n%5])
images[n].imshow(X_train[pic_index].reshape(28,28), cmap=plt.cm.gray)
images[n].set_title(label_dict[n])
images[n].axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Data preprocessing: normalize the image data to improve training performance, take the first 5000 samples of the training data as a validation set, one-hot encode the labels of the train, validation and test sets for training, and reshape the flattened single-column image arrays into the (n, width, height, channel) shape accepted by TensorFlow.
###Code
from tensorflow.python.keras.utils import to_categorical
# Normalize the data
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
# Take the first 5000 samples as the validation set
X_train, X_valid = X_train[5000:], X_train[:5000]
y_train, y_valid = y_train[5000:], y_train[:5000]
# Set the tensor shape
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_valid = X_valid.reshape(X_valid.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
# One-hot encode the labels
y_train = to_categorical(y_train, 10)
y_valid = to_categorical(y_valid, 10)
y_test = to_categorical(y_test, 10)
# Print the shapes of the feature and label arrays
print('训练数据特征集大小为{},训练数据标签集大小为{}'.format(X_train.shape, y_train.shape))
print('验证数据特征集大小为{},验证数据标签集大小为{}'.format(X_valid.shape, y_valid.shape))
print('测试数据特征集大小为{},测试数据标签集大小为{}'.format(X_test.shape, y_test.shape))
# Use the CNN modules of Keras, TensorFlow's high-level API, for rapid development
from tensorflow.python.keras.layers import Conv2D, MaxPooling2D, Dropout, Activation
from tensorflow.python.keras.layers import Flatten, Dense, BatchNormalization
from tensorflow.python.keras.models import Sequential
# Set up a sequential model
model = Sequential()
# First convolution block: 64 3x3 kernels, same padding, max pooling, a BatchNormalization layer, ReLU activation and 20% dropout
model.add(Conv2D(filters=64, kernel_size=[3, 3], input_shape=[28, 28, 1], padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
# Second convolution block: 128 3x3 kernels, same padding, max pooling, a BatchNormalization layer, ReLU activation and 20% dropout
model.add(Conv2D(filters=128, kernel_size=[3, 3], padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
# Third convolution block: 256 3x3 kernels, same padding, max pooling, a BatchNormalization layer, ReLU activation and 20% dropout
model.add(Conv2D(filters=256, kernel_size=[3, 3], padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
# Add a flatten layer
model.add(Flatten())
# Add fully connected layers
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
# The final layer outputs predictions over the 10 labels, using a softmax activation
model.add(Dense(10, activation='softmax'))
model.summary()
# Use the Nadam optimizer, cross-entropy as the loss function and accuracy as the evaluation metric
model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])
from tensorflow.python.keras.callbacks import ModelCheckpoint
# Use 100 epochs and a batch size of 100
epochs = 100
batch_size = 100
# Keep the best model
checkpointer = ModelCheckpoint(filepath = 'saved_models/weights.best.from_scratch.hdf5',
verbose = 1, save_best_only = True)
# Train the model
model.fit(X_train, y_train,
validation_data=(X_valid, y_valid),
epochs=epochs,
batch_size=batch_size,
callbacks=[checkpointer],
verbose=1)
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
# fashion_predict = [np.argmax(model.predict(np.expand_dims(img, axis=0))) for img in X_test]
# test_accuracy = 100 * np.sum(np.array(fashion_predict) == np.argmax(y_test, axis=1))/len(fashion_predict)
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('训练的CNN模型测试集损失为{:4},测试集准确率为{:4}%'.format(loss, accuracy*100))
###Output
测试集损失为0.22500082306861877,测试集准确率为92.11%
###Markdown
Display the first 20 images that the model misclassified
###Code
n_index = 0
miss_classified = []
# Record the first 20 misclassified items
while len(miss_classified) < 20:
predicted = np.argmax(model.predict(np.expand_dims(X_test[n_index], axis=0)))
actual = np.argmax(y_test[n_index])
if predicted != actual:
miss_classified.append((n_index, predicted, actual))
n_index +=1
# Display the misclassified images
fig, axs = plt.subplots(4, 5)
images = []
for n in range(20):
pic_index = miss_classified[n]
images.append(axs[n//5,n%5])
images[n].imshow(X_test[pic_index[0]].reshape(28,28), cmap=plt.cm.gray)
images[n].set_title('{}\n{}'.format(label_dict[pic_index[1]],
label_dict[pic_index[2]]), color='r')
images[n].axis('off')
fig.set_size_inches(14, 10, forward=True)
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
Ocean_DeepLearning_09_02_2021.ipynb | ###Markdown
###Code
import keras # Import the Keras library
from keras.datasets import mnist # MNIST dataset
from tensorflow.python.keras import Sequential # Architecture of our neural network
from tensorflow.python.keras.layers import Dense, Dropout # Neuron (building block of the network) and regularizer (avoids overfitting)
from tensorflow.compat.v1.keras.optimizers import RMSprop # Optimizer (back propagation)
# Load the training and test data
(x_treino, y_treino), (x_teste, y_teste) = mnist.load_data()
# After importing the data, it is important to take a look at what the dataset contains
# and how it is structured
print("Quantidade de imagens para treino:", len(x_treino))
print("Quantidade de imagens para teste:", len(x_teste))
print("Tipo de x_treino:", type(x_treino))
primeira_imagem = x_treino[0]
representacao_primeira_imagem = y_treino[0]
print("O que a imagem 0 representa:", representacao_primeira_imagem)
print("Formato da primeira imagem:", primeira_imagem.shape)
print(primeira_imagem)
###Output
Quantidade de imagens para treino: 60000
Quantidade de imagens para teste: 10000
Tipo de x_treino: <class 'numpy.ndarray'>
O que a imagem 0 representa: 5
Formato da primeira imagem: (28, 28)
[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 3 18 18 18 126 136
175 26 166 255 247 127 0 0 0 0]
[ 0 0 0 0 0 0 0 0 30 36 94 154 170 253 253 253 253 253
225 172 253 242 195 64 0 0 0 0]
[ 0 0 0 0 0 0 0 49 238 253 253 253 253 253 253 253 253 251
93 82 82 56 39 0 0 0 0 0]
[ 0 0 0 0 0 0 0 18 219 253 253 253 253 253 198 182 247 241
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 80 156 107 253 253 205 11 0 43 154
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 14 1 154 253 90 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 139 253 190 2 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 11 190 253 70 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 35 241 225 160 108 1
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 81 240 253 253 119
25 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 45 186 253 253
150 27 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 93 252
253 187 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 249
253 249 64 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 46 130 183 253
253 207 2 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 39 148 229 253 253 253
250 182 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 24 114 221 253 253 253 253 201
78 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 23 66 213 253 253 253 253 198 81 2
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 18 171 219 253 253 253 253 195 80 9 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 55 172 226 253 253 253 253 244 133 11 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 136 253 253 253 212 135 132 16 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]]
###Markdown
###Code
import keras # Import the Keras library
from keras.datasets import mnist # MNIST dataset
from tensorflow.python.keras import Sequential # Architecture of our neural network
from tensorflow.python.keras.layers import Dense, Dropout # Neuron (building block of the network) and regularizer (avoids overfitting)
from tensorflow.compat.v1.keras.optimizers import RMSprop # Optimizer (back propagation)
# Load the training and test data
(x_treino, y_treino), (x_teste, y_teste) = mnist.load_data()
# After importing the data, it is important to take a look at what the dataset contains
# and how it is structured
print("Quantidade de imagens para treino:", len(x_treino))
print("Quantidade de imagens para teste:", len(x_teste))
print("Tipo de x_treino:", type(x_treino))
primeira_imagem = x_treino[0]
representacao_primeira_imagem = y_treino[0]
print("O que a imagem 0 representa:", representacao_primeira_imagem)
print("Formato da primeira imagem:", primeira_imagem.shape, type(primeira_imagem.shape))
print(primeira_imagem)
import matplotlib.pyplot as plt
indice = 12000
print("A imagem representa:", y_treino[indice])
plt.imshow(x_treino[indice], cmap=plt.cm.binary)
# Workflow for building the neural network
# - Organize the input layer
# - Organize the output layer
# - Structure our neural network
# - Train the model
# - Make the predictions
# Flatten the pixel matrix into a single list
quantidade_treino = len(x_treino) # 60000
quantidade_teste = len(x_teste) # 10000
resolucao_imagem = x_treino[0].shape # (28, 28)
resolucao_total = resolucao_imagem[0] * resolucao_imagem[1] # 28 * 28 = 784
x_treino = x_treino.reshape(quantidade_treino, resolucao_total)
x_teste = x_teste.reshape(quantidade_teste, resolucao_total)
print("Quantidade de itens em x_treino[0]:", len(x_treino[0]))
# Como ficou x_treino[0]?
print(x_treino[0])
# Data normalization
# 255 becomes 1
# 127 becomes 0.5
# 0 becomes 0
# And so on
x_treino = x_treino.astype('float32') # Converts all of x_treino from uint8 to float32
x_teste = x_teste.astype('float32') # Converts all of x_teste from uint8 to float32
x_treino /= 255 # Divides all 60000 values of x_treino by 255 and stores the result back in x_treino
x_teste /= 255 # Divides all 10000 values of x_teste by 255 and stores the result back in x_teste
# We access the first image, available in x_treino[0], and then display the value at pixel 350 of the image
# Remember that each row has 28 pixels (0-27), so accessing index 28 means accessing the 1st pixel of the second row.
print(x_treino[0][350], type(x_treino[0][350]))
print(x_treino[0])
# Preparing the output layer
# What are the possible outputs? The digits 0 to 9
# How many items do we have? 10 items
# Digits  -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Digit 0 -> [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# Digit 1 -> [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
# Digit 9 -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
valores_unicos = set(y_treino) # {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
print(valores_unicos)
quantidade_valores_unicos = len(valores_unicos) # 10
print(quantidade_valores_unicos)
print("y_treino[0] antes:", y_treino[0])
y_treino = keras.utils.to_categorical(y_treino, quantidade_valores_unicos)
y_teste = keras.utils.to_categorical(y_teste, quantidade_valores_unicos)
print("y_treino[0] depois:", y_treino[0])
# Creating the neural network model
model = Sequential()
# First hidden layer
# 30 neurons
# Activation function: ReLU
# Since this is the first hidden layer, we need to specify the shape of the input layer
model.add(Dense(30, activation='relu', input_shape=(resolucao_total,)))
# We add a regularizer, which helps to avoid overfitting
# In this case, Dropout
model.add(Dropout(0.2))
# Second hidden layer
# 20 neurons
# Activation function: ReLU
model.add(Dense(20, activation='relu'))
# One more regularizer after the second hidden layer
model.add(Dropout(0.2))
# We finish with the output layer, specifying the number of unique values, which in this case is 10
# Activation function: since ReLU should be used only in the hidden layers, we use Softmax here
model.add(Dense(quantidade_valores_unicos, activation='softmax'))
# Exibe o resumo do modelo criado
model.summary()
# Compile and train the model
# We need to specify:
# The loss function
# The backpropagation algorithm
# The training data (normalized images and categorical labels)
# The test data (normalized images and categorical labels)
# The number of epochs to run (1 epoch means going through ALL the training images)
# The size of each 'batch'
# -> Suppose we have 100 images
# -> 100 images may be too heavy to process in a single pass
# -> So we split them into 'batches' of 10 images each and process 10 images at a time
# -> Usually the batch size should be a power of 2 (2, 4, 8, 16, 32, 64, 128, ...) to improve performance
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
# Train the model
history = model.fit(x_treino, y_treino,
batch_size=128,
epochs=10,
verbose=1,
validation_data=(x_teste, y_teste))
# Making our predictions
indice = 1234
print("Valor categórico em y_teste[indice]:", y_teste[indice])
# Since model.predict accepts more than one image at a time
# and we only want to analyze a single image, we need to reshape it, so that [0, 0, 0, 0] becomes [[0, 0, 0, 0]]
imagem = x_teste[indice].reshape((1, resolucao_total))
# Make the prediction for the image
prediction = model.predict(imagem)
print("Previsão:", prediction)
# Turn the prediction into something easier to understand
import numpy as np
# We convert the prediction, which is given as probabilities, by taking the largest value
prediction_class = np.argmax(prediction, axis=-1)
print("Previsão ajustada:", prediction_class)
# Reload MNIST and display the original image using the matplotlib imported earlier
(x_treino_img, y_treino_img), (x_teste_img, y_teste_img) = mnist.load_data()
plt.imshow(x_teste_img[indice], cmap=plt.cm.binary)
###Output
Valor categórico em y_teste[indice]: [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
Previsão: [[3.6610167e-02 1.4030529e-03 3.6900729e-02 1.8894464e-01 1.7412652e-04
3.2137614e-01 1.6743107e-02 1.2363173e-04 3.9625725e-01 1.4672127e-03]]
Previsão ajustada: [8]
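###Markdown
Besides inspecting a single prediction, the overall test-set performance can be checked with Keras' `evaluate`; a minimal sketch using the variables defined above (not part of the original run):
###Code
# Sketch: overall loss/accuracy on the full (normalized, one-hot encoded) test set
test_loss, test_acc = model.evaluate(x_teste, y_teste, verbose=0)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)
###Output
_____no_output_____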
###Markdown
###Code
import keras # Import the Keras library
from keras.datasets import mnist # MNIST dataset
from tensorflow.python.keras import Sequential # Architecture of our neural network
from tensorflow.python.keras.layers import Dense, Dropout # Neuron (building block of the network) and regularizer (avoids overfitting)
from tensorflow.compat.v1.keras.optimizers import RMSprop # Optimizer (back propagation)
# Load the training and test data
(x_treino, y_treino), (x_teste, y_teste) = mnist.load_data()
# After importing the data, it is important to take a look at what the dataset contains
# and how it is structured
print("Quantidade de imagens para treino", len(x_treino))
print("Quantidade de imagens para teste", len(x_teste))
print("Tipo de x_treino", type(x_treino))
primeira_imagem = x_treino[0]
representacao_primeira_imagem = y_treino[0]
print("O que tem a imagem 0 representa:",representacao_primeira_imagem)
print("Formato da primeira imagem:",primeira_imagem.shape, type(primeira_imagem.shape))
print(primeira_imagem)
import matplotlib.pyplot as plt
indice = 12000
print("A imagem representa:",y_treino[indice])
plt.imshow(x_treino[indice], cmap=plt.cm.binary)
###Output
_____no_output_____ |
Beginner_3/9. Exception Handling.ipynb | ###Markdown
9.1 [Try](https://docs.python.org/3.5/reference/compound_stmts.htmltry)Even if the syntax is correct, a line of code may still produce an error. It doesn't mean the code wouldn't work. Python's `try` statement is used to handle [Exceptions](https://docs.python.org/3.5/reference/executionmodel.htmlexceptions) detected during execution.
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError:
print("Oops! Not a valid number. Try again...")
###Output
Please enter a number: 12512
Please enter a number: AA
Oops! Not a valid number. Try again...
Please enter a number: ga
Oops! Not a valid number. Try again...
Please enter a number: 6123
###Markdown
9.2 ExceptThe `except` clause allows you to specify exception handlers. Followed by an Exception class, it allows you to catch specific exceptions and control how each is handled.- In Python 3.x, the exception class is aliased using the `as` keyword followed by a variable for storage and use in the next block of statements- In Python 2.x, the exception class is separated from the variable by a comma 9.2.1 [Exception Classes](https://docs.python.org/3.5/library/exceptions.htmlbltin-exceptions)Python has built-in Exception classes that you can catch with `except`. Multiple `except` clauses can be used in a `try` statement.Once you learn about `classes`, you'll learn that new exceptions can be created easily since these `Exceptions` are just classes:
###Code
class NotANumberException(Exception):
pass
###Output
_____no_output_____
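###Markdown
As noted above, multiple `except` clauses can be attached to a single `try` statement; a minimal sketch of that pattern, with the most specific exception classes listed first:
###Code
# Sketch: several handlers attached to one try statement
def parse_ratio(a, b):
    try:
        return int(a) / int(b)
    except ValueError as e:         # non-numeric input
        print("Not a number:", e)
    except ZeroDivisionError as e:  # b == 0
        print("Division by zero:", e)
    except Exception as e:          # any other exception
        print("Unexpected error:", e)

parse_ratio("10", "0")
###Output
_____no_output_____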
###Markdown
9.2.2 Raising ExceptionsWhen errors occur, a `raise` statement can be used to halt the execution of the script. `raise` statements are followed by Exception classes. Exceptions can accept an argument that is displayed when the exception occurs. This is the new-style of raising an error. The old-style was writing the exception followed by the argument, separated by a comma.- In Python 3.x, only accepts the new-style of writing exceptions- In Python 2.x, it is possible to write exceptions using old and new styles
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError as e: # exceptions in Python 3.x are aliased
raise NotANumberException(e) # only new-style works
# raise NotANumberException("A number was not entered.")
else:
break
finally:
print("Done...")
###Output
Please enter a number: a
Done...
###Markdown
In Python 2.x, it was also possible to `raise` an exception like below:
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError, e: # exceptions in Python 2.x separated by comma
raise NotANumberException, e # both old and new styles work in Python 2.x
else:
break
finally:
print("Done...")
###Output
_____no_output_____
###Markdown
9.3 ElseThe optional `else` clause executes if all the lines within the `try` clause execute without errors. Therefore, no exceptions occured. Being after the scope of the `try` statement, exceptions within the `else` clause are no longer handled by any `except` clause above it. 9.4 FinallyThere is an optional `finally` statement that executes after handling `except` and `else` clauses. Exceptions not handled within any clauses are temporarily saved. The `finally` clause is executed before re-raising unhandled exceptions unless it executes a `return` or `break` statement.
###Code
def f():
try:
1/0
finally:
return 42
f()
# executes a return statement therefore error is not raised
def f():
try:
1/0
finally:
print(42)
# return 42
f()
# 42 still gets printed before the error is raised
###Output
42
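###Markdown
A small sketch of the `else` clause described in section 9.3: it runs only when the `try` block raises no exception, and it runs before `finally`.
###Code
# Sketch: else runs only on success; finally always runs
def safe_div(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        print("cannot divide by zero")
    else:
        print("no exception, result =", result)
    finally:
        print("done")

safe_div(4, 2)
safe_div(4, 0)
###Output
_____no_output_____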
###Markdown
9.1 [Try](https://docs.python.org/3.5/reference/compound_stmts.htmltry)Even if the syntax is correct, a line of code may still produce an error. It doesn't mean the code wouldn't work. Python's `try` statement is used to handle [Exceptions](https://docs.python.org/3.5/reference/executionmodel.htmlexceptions) detected during execution.
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError:
print("Oops! Not a valid number. Try again...")
break
###Output
Please enter a number: 10
Please enter a number: ds
Oops! Not a valid number. Try again...
###Markdown
9.2 ExceptThe `except` clause allows you to specify exception handlers. Followed by an Exception class, it allows you to catch specific exceptions and control how each is handled.- In Python 3.x, the exception class is aliased using the `as` keyword followed by a variable for storage and use in the next block of statements- In Python 2.x, the exception class is separated from the variable by a comma 9.2.1 [Exception Classes](https://docs.python.org/3.5/library/exceptions.htmlbltin-exceptions)Python has built-in Exception classes that you can catch with `except`. Multiple `except` clauses can be used in a `try` statement.Once you learn about `classes`, you'll learn that new exceptions can be created easily since these `Exceptions` are just classes:
###Code
class NotANumberException(Exception):
pass
###Output
_____no_output_____
###Markdown
9.2.2 Raising ExceptionsWhen errors occur, a `raise` statement can be used to halt the execution of the script. `raise` statements are followed by Exception classes. Exceptions can accept an argument that is displayed when the exception occurs. This is the new-style of raising an error. The old-style was writing the exception followed by the argument, separated by a comma.- In Python 3.x, only accepts the new-style of writing exceptions- In Python 2.x, it is possible to write exceptions using old and new styles
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError as e: # exceptions in Python 3.x are aliased
raise NotANumberException(e) # only new-style works
# raise NotANumberException("A number was not entered.")
else:
break
finally:
print("Done...")
###Output
Please enter a number: as
Done...
###Markdown
In Python 2.x, it was also possible to `raise` an exception like below:
###Code
while True:
try:
x = int(input("Please enter a number: "))
except ValueError, e: # exceptions in Python 2.x separated by comma
raise NotANumberException, e # both old and new styles work in Python 2.x
else:
break
finally:
print("Done...")
###Output
_____no_output_____
###Markdown
9.3 ElseThe optional `else` clause executes if all the lines within the `try` clause execute without errors. Therefore, no exceptions occured. Being after the scope of the `try` statement, exceptions within the `else` clause are no longer handled by any `except` clause above it. 9.4 FinallyThere is an optional `finally` statement that executes after handling `except` and `else` clauses. Exceptions not handled within any clauses are temporarily saved. The `finally` clause is executed before re-raising unhandled exceptions unless it executes a `return` or `break` statement.
###Code
def f():
try:
1/0
finally:
return 42
f()
# executes a return statement therefore error is not raised
def f():
try:
1/0
finally:
print(42)
# return 42
f()
# 42 still gets printed before the error is raised
###Output
42
|
M_accelerate_6lastlayer-8fold_91.41-Copy5.ipynb | ###Markdown
MobileNet - Pytorch Step 1: Prepare data
###Code
# MobileNet-Pytorch
import argparse
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.metrics import accuracy_score
#from mobilenets import mobilenet
use_cuda = torch.cuda.is_available()
use_cudause_cud = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
# Train, Validate, Test. Heavily inspired by Kevinzakka https://github.com/kevinzakka/DenseNet/blob/master/data_loader.py
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
valid_size=0.1
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=train_transform)
valid_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=valid_transform)
num_train = len(train_dataset)
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train)) # 10% of the 50k training images are used as the validation set
np.random.seed(42)# 42
np.random.shuffle(indices) # randomly shuffle [0,1,...,49999]
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx) # this is the interesting part
valid_sampler = SubsetRandomSampler(valid_idx)
###################################################################################
# ------------------------- use different batch sizes ----------------------------
###################################################################################
show_step=2 # the larger the batch, the smaller show_step should be
max_epoch=80 # maximum number of training epochs
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=256, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=256, sampler=valid_sampler)
test_transform = transforms.Compose([
transforms.ToTensor(), normalize
])
test_dataset = datasets.CIFAR10(root="data",
train=False,
download=True,transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=256,
shuffle=True)
###Output
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Step 2: Model Config The 32×32 input is downscaled 5 times to 1×1@1024; the architecture is adapted from https://github.com/kuangliu/pytorch-cifar. This markdown cell originally held a flattened earlier draft of the `Block`/`MobileNet` classes; the runnable version is in the code cell below. The idea of the draft: in each depthwise-separable block, the input is average-pooled over its spatial dimensions, a 1-D convolution (`nn.Conv1d`) over the resulting channel vector generates the weights of the 1×1 pointwise combination, those weights are squashed with `0.5*tanh` into [-0.5, +0.5], and the weighted sum is followed by BatchNorm and ReLU.
###Code
# 32 is downscaled 5 times to 1x1@1024
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block_Attention_HALF(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention_HALF, self).__init__()
# number of conv groups = number of input channels (depthwise)
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#------------------------ first half of the output channels: plain 1x1 conv ------------------------------
self.conv2 = nn.Conv2d(in_planes, int(out_planes*0.125), kernel_size=1, stride=1, padding=0, bias=True)
#------------------------ other half: weights generated by a 1-D conv ----------------------------
one_conv_kernel_size = 17 # [3,7,9]
self.conv1D= nn.Conv1d(1, int(out_planes*0.875), one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=True) # initialized in __init__
#------------------------------------------------------------
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu6(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
w = F.avg_pool2d(x,x.shape[-1]) # better to define this in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
# 对这批数据取平均 且保留第0维
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
#w=w[0]-torch.mean(w[0])
w=torch.randn(w[0].shape).cuda()*0.01
a=torch.randn(1).cuda()*0.1
if a>0.37:
print(w.shape)
print(w)
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
# one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # 在__init__初始化
# [bs=1,out_channel//2,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel//2,in_Channel]
#-------------------------------------
w = 0.1*F.tanh(w) # [-0.5,+0.5]
if a>0.37:
print(w.shape)
print(w)
# [bs=1,out_channel//2,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel//2,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out_1=self.conv2(out)
out_2=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out=torch.cat([out_1,out_2],1)
# ----------------------- try it without relu -------------------------------
out = F.relu6(self.bn2(out))
return out
class Block_Attention(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention, self).__init__()
# 分组卷积数=输入通道数
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
one_conv_kernel_size = 17 # [3,7,9]
self.conv1D= nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=False) # 在__init__初始化
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
w = F.avg_pool2d(x,x.shape[-1]) #最好在初始化层定义好
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
# 对这批数据取平均 且保留第0维
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
w=w[0]
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
# one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # 在__init__初始化
# [bs=1,out_channel,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel,in_Channel]
w = 0.5*F.tanh(w) # [-0.5,+0.5]
# [bs=1,out_channel,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out = F.relu(self.bn2(out))
return out
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
# 分组卷积数=输入通道数
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), [1024,1]]
cfg = [64, (128,2), 128, 256, 256, (512,2), 512, [512,1], [512,1],[512,1], [512,1], [1024,1], [1024,1]]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.layers = self._make_layers(in_planes=32) # build the layers automatically
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
if isinstance(x, int):
out_planes = x
stride = 1
layers.append(Block(in_planes, out_planes, stride))
elif isinstance(x, tuple):
out_planes = x[0]
stride = x[1]
layers.append(Block(in_planes, out_planes, stride))
# Attention (AC) layers are configured via a list entry
elif isinstance(x, list):
out_planes= x[0]
stride = x[1] if len(x)==2 else 1
layers.append(Block_Attention_HALF(in_planes, out_planes, stride))
else:
pass
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 8)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# From https://github.com/Z0m6ie/CIFAR-10_PyTorch
#model = mobilenet(num_classes=10, large_img=False)
# From https://github.com/kuangliu/pytorch-cifar
if torch.cuda.is_available():
model=MobileNet(10).cuda()
else:
model=MobileNet(10)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
#scheduler = StepLR(optimizer, step_size=70, gamma=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50,70,75,80], gamma=0.1)
criterion = nn.CrossEntropyLoss()
# Implement validation
def train(epoch):
model.train()
#writer = SummaryWriter()
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
correct = 0
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
loss = criterion(output, target)
loss.backward()
accuracy = 100. * (correct.cpu().numpy()/ len(output))
optimizer.step()
if batch_idx % 5*show_step == 0:
# if batch_idx % 2*show_step == 0:
# print(model.layers[1].conv1D.weight.shape)
# print(model.layers[1].conv1D.weight[0:2][0:2])
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write("\n"+'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
# epoch, batch_idx * len(data), len(train_loader.dataset),
# 100. * batch_idx / len(train_loader), loss.item(), accuracy))
# f1.close()
#writer.add_scalar('Loss/Loss', loss.item(), epoch)
#writer.add_scalar('Accuracy/Accuracy', accuracy, epoch)
scheduler.step()
def validate(epoch):
model.eval()
#writer = SummaryWriter()
valid_loss = 0
correct = 0
for data, target in valid_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
valid_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
valid_loss /= len(valid_idx)
accuracy = 100. * correct.cpu().numpy() / len(valid_idx)
print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
# valid_loss, correct, len(valid_idx),
# 100. * correct / len(valid_idx)))
# f1.close()
#writer.add_scalar('Loss/Validation_Loss', valid_loss, epoch)
#writer.add_scalar('Accuracy/Validation_Accuracy', accuracy, epoch)
return valid_loss, accuracy
# Fix best model
def test(epoch):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
# test_loss, correct, len(test_loader.dataset),
# 100. * correct.cpu().numpy() / len(test_loader.dataset)))
# f1.close()
def save_best(loss, accuracy, best_loss, best_acc):
if best_loss == None:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
elif loss < best_loss and accuracy > best_acc:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
return best_loss, best_acc
# Fantastic logger for tensorboard and pytorch,
# run tensorboard by opening a new terminal and run "tensorboard --logdir runs"
# open tensorboard at http://localhost:6006/
from tensorboardX import SummaryWriter
best_loss = None
best_acc = None
import time
SINCE=time.time()
for epoch in range(max_epoch):
train(epoch)
loss, accuracy = validate(epoch)
best_loss, best_acc = save_best(loss, accuracy, best_loss, best_acc)
NOW=time.time()
DURINGS=NOW-SINCE
SINCE=NOW
print("the time of this epoch:[{} s]".format(DURINGS))
if epoch>=10 and (epoch-10)%2==0:
test(epoch)
# writer = SummaryWriter()
# writer.export_scalars_to_json("./all_scalars.json")
# writer.close()
#---------------------------- Test ------------------------------
test(epoch)
###Output
Train Epoch: 0 [0/50000 (0%)] Loss: 2.316704, Accuracy: 7.81
Train Epoch: 0 [1280/50000 (3%)] Loss: 2.353208, Accuracy: 7.81
Train Epoch: 0 [2560/50000 (6%)] Loss: 2.299966, Accuracy: 10.16
Train Epoch: 0 [3840/50000 (9%)] Loss: 2.243143, Accuracy: 12.11
Train Epoch: 0 [5120/50000 (11%)] Loss: 2.133137, Accuracy: 17.97
Train Epoch: 0 [6400/50000 (14%)] Loss: 1.998923, Accuracy: 18.75
Train Epoch: 0 [7680/50000 (17%)] Loss: 2.116172, Accuracy: 18.36
Train Epoch: 0 [8960/50000 (20%)] Loss: 2.136685, Accuracy: 13.28
Train Epoch: 0 [10240/50000 (23%)] Loss: 2.061302, Accuracy: 17.19
Train Epoch: 0 [11520/50000 (26%)] Loss: 1.919407, Accuracy: 27.34
Train Epoch: 0 [12800/50000 (28%)] Loss: 1.915111, Accuracy: 25.78
Train Epoch: 0 [14080/50000 (31%)] Loss: 2.018020, Accuracy: 17.19
Train Epoch: 0 [15360/50000 (34%)] Loss: 1.990862, Accuracy: 22.66
Train Epoch: 0 [16640/50000 (37%)] Loss: 1.868446, Accuracy: 23.83
Train Epoch: 0 [17920/50000 (40%)] Loss: 1.933813, Accuracy: 21.09
Train Epoch: 0 [19200/50000 (43%)] Loss: 1.868474, Accuracy: 22.66
Train Epoch: 0 [20480/50000 (45%)] Loss: 1.916965, Accuracy: 23.44
Train Epoch: 0 [21760/50000 (48%)] Loss: 1.861632, Accuracy: 30.86
Train Epoch: 0 [23040/50000 (51%)] Loss: 1.805478, Accuracy: 29.69
Train Epoch: 0 [24320/50000 (54%)] Loss: 1.812276, Accuracy: 27.73
Train Epoch: 0 [25600/50000 (57%)] Loss: 1.898470, Accuracy: 27.73
Train Epoch: 0 [26880/50000 (60%)] Loss: 1.806492, Accuracy: 26.56
Train Epoch: 0 [28160/50000 (62%)] Loss: 1.776056, Accuracy: 30.08
Train Epoch: 0 [29440/50000 (65%)] Loss: 1.734005, Accuracy: 23.83
Train Epoch: 0 [30720/50000 (68%)] Loss: 1.742873, Accuracy: 30.08
Train Epoch: 0 [32000/50000 (71%)] Loss: 1.627363, Accuracy: 32.81
Train Epoch: 0 [33280/50000 (74%)] Loss: 1.667299, Accuracy: 31.64
Train Epoch: 0 [34560/50000 (77%)] Loss: 1.704336, Accuracy: 29.30
Train Epoch: 0 [35840/50000 (80%)] Loss: 1.783468, Accuracy: 30.47
Train Epoch: 0 [37120/50000 (82%)] Loss: 1.812645, Accuracy: 28.91
Train Epoch: 0 [38400/50000 (85%)] Loss: 1.729903, Accuracy: 29.30
Train Epoch: 0 [39680/50000 (88%)] Loss: 1.669595, Accuracy: 37.11
Train Epoch: 0 [40960/50000 (91%)] Loss: 1.754832, Accuracy: 31.25
Train Epoch: 0 [42240/50000 (94%)] Loss: 1.694833, Accuracy: 35.16
Train Epoch: 0 [43520/50000 (97%)] Loss: 1.660976, Accuracy: 35.16
Train Epoch: 0 [35000/50000 (99%)] Loss: 1.677888, Accuracy: 34.50
Validation set: Average loss: 1.7957, Accuracy: 1459/5000 (29.00%)
the time of this epoch:[36.57642912864685 s]
Train Epoch: 1 [0/50000 (0%)] Loss: 1.636138, Accuracy: 38.28
Train Epoch: 1 [1280/50000 (3%)] Loss: 1.673028, Accuracy: 34.38
Train Epoch: 1 [2560/50000 (6%)] Loss: 1.543464, Accuracy: 35.55
Train Epoch: 1 [3840/50000 (9%)] Loss: 1.639641, Accuracy: 37.89
Train Epoch: 1 [5120/50000 (11%)] Loss: 1.619625, Accuracy: 36.72
Train Epoch: 1 [6400/50000 (14%)] Loss: 1.672857, Accuracy: 38.67
Train Epoch: 1 [7680/50000 (17%)] Loss: 1.631667, Accuracy: 33.98
Train Epoch: 1 [8960/50000 (20%)] Loss: 1.540546, Accuracy: 37.11
Train Epoch: 1 [10240/50000 (23%)] Loss: 1.630267, Accuracy: 36.33
Train Epoch: 1 [11520/50000 (26%)] Loss: 1.642856, Accuracy: 36.33
Train Epoch: 1 [12800/50000 (28%)] Loss: 1.517291, Accuracy: 41.80
Train Epoch: 1 [14080/50000 (31%)] Loss: 1.513350, Accuracy: 39.06
Train Epoch: 1 [15360/50000 (34%)] Loss: 1.489782, Accuracy: 41.02
Train Epoch: 1 [16640/50000 (37%)] Loss: 1.626507, Accuracy: 34.38
Train Epoch: 1 [17920/50000 (40%)] Loss: 1.560713, Accuracy: 43.36
Train Epoch: 1 [19200/50000 (43%)] Loss: 1.497349, Accuracy: 41.80
Train Epoch: 1 [20480/50000 (45%)] Loss: 1.527145, Accuracy: 39.84
Train Epoch: 1 [21760/50000 (48%)] Loss: 1.533380, Accuracy: 40.23
Train Epoch: 1 [23040/50000 (51%)] Loss: 1.443560, Accuracy: 45.70
Train Epoch: 1 [24320/50000 (54%)] Loss: 1.387918, Accuracy: 45.31
Train Epoch: 1 [25600/50000 (57%)] Loss: 1.578513, Accuracy: 39.45
Train Epoch: 1 [26880/50000 (60%)] Loss: 1.557259, Accuracy: 40.23
Train Epoch: 1 [28160/50000 (62%)] Loss: 1.422958, Accuracy: 44.92
Train Epoch: 1 [29440/50000 (65%)] Loss: 1.648400, Accuracy: 39.06
Train Epoch: 1 [30720/50000 (68%)] Loss: 1.582535, Accuracy: 36.33
Train Epoch: 1 [32000/50000 (71%)] Loss: 1.412092, Accuracy: 43.75
Train Epoch: 1 [33280/50000 (74%)] Loss: 1.487372, Accuracy: 41.80
Train Epoch: 1 [34560/50000 (77%)] Loss: 1.371019, Accuracy: 47.66
Train Epoch: 1 [35840/50000 (80%)] Loss: 1.522084, Accuracy: 41.80
Train Epoch: 1 [37120/50000 (82%)] Loss: 1.427002, Accuracy: 45.70
Train Epoch: 1 [38400/50000 (85%)] Loss: 1.469273, Accuracy: 46.48
Train Epoch: 1 [39680/50000 (88%)] Loss: 1.315190, Accuracy: 50.00
Train Epoch: 1 [40960/50000 (91%)] Loss: 1.464359, Accuracy: 44.53
Train Epoch: 1 [42240/50000 (94%)] Loss: 1.386425, Accuracy: 48.83
Train Epoch: 1 [43520/50000 (97%)] Loss: 1.309160, Accuracy: 51.95
Train Epoch: 1 [35000/50000 (99%)] Loss: 1.430823, Accuracy: 48.00
Validation set: Average loss: 1.5254, Accuracy: 2233/5000 (44.00%)
the time of this epoch:[36.41851210594177 s]
Train Epoch: 2 [0/50000 (0%)] Loss: 1.432113, Accuracy: 45.70
Train Epoch: 2 [1280/50000 (3%)] Loss: 1.384920, Accuracy: 50.39
Train Epoch: 2 [2560/50000 (6%)] Loss: 1.336313, Accuracy: 53.52
Train Epoch: 2 [3840/50000 (9%)] Loss: 1.358443, Accuracy: 51.56
Train Epoch: 2 [5120/50000 (11%)] Loss: 1.355717, Accuracy: 45.31
Train Epoch: 2 [6400/50000 (14%)] Loss: 1.270383, Accuracy: 51.56
Train Epoch: 2 [7680/50000 (17%)] Loss: 1.369348, Accuracy: 51.95
Train Epoch: 2 [8960/50000 (20%)] Loss: 1.317390, Accuracy: 52.73
Train Epoch: 2 [10240/50000 (23%)] Loss: 1.255899, Accuracy: 54.30
Train Epoch: 2 [11520/50000 (26%)] Loss: 1.343368, Accuracy: 47.66
Train Epoch: 2 [12800/50000 (28%)] Loss: 1.291830, Accuracy: 57.42
Train Epoch: 2 [14080/50000 (31%)] Loss: 1.149176, Accuracy: 58.59
Train Epoch: 2 [15360/50000 (34%)] Loss: 1.243597, Accuracy: 53.12
Train Epoch: 2 [16640/50000 (37%)] Loss: 1.186895, Accuracy: 58.98
Train Epoch: 2 [17920/50000 (40%)] Loss: 1.217560, Accuracy: 55.47
Train Epoch: 2 [19200/50000 (43%)] Loss: 1.408130, Accuracy: 44.14
Train Epoch: 2 [20480/50000 (45%)] Loss: 1.339621, Accuracy: 51.56
Train Epoch: 2 [21760/50000 (48%)] Loss: 1.200476, Accuracy: 51.56
Train Epoch: 2 [23040/50000 (51%)] Loss: 1.135905, Accuracy: 57.42
Train Epoch: 2 [24320/50000 (54%)] Loss: 1.026734, Accuracy: 64.06
Train Epoch: 2 [25600/50000 (57%)] Loss: 1.298464, Accuracy: 55.86
Train Epoch: 2 [26880/50000 (60%)] Loss: 1.217263, Accuracy: 57.03
Train Epoch: 2 [28160/50000 (62%)] Loss: 1.132970, Accuracy: 57.81
Train Epoch: 2 [29440/50000 (65%)] Loss: 1.103492, Accuracy: 62.50
Train Epoch: 2 [30720/50000 (68%)] Loss: 1.172130, Accuracy: 56.25
Train Epoch: 2 [32000/50000 (71%)] Loss: 1.124180, Accuracy: 60.94
Train Epoch: 2 [33280/50000 (74%)] Loss: 1.250224, Accuracy: 53.91
Train Epoch: 2 [34560/50000 (77%)] Loss: 1.136258, Accuracy: 62.11
Train Epoch: 2 [35840/50000 (80%)] Loss: 1.031737, Accuracy: 66.41
Train Epoch: 2 [37120/50000 (82%)] Loss: 1.095419, Accuracy: 60.16
Train Epoch: 2 [38400/50000 (85%)] Loss: 1.181455, Accuracy: 58.20
Train Epoch: 2 [39680/50000 (88%)] Loss: 1.045014, Accuracy: 65.62
Train Epoch: 2 [40960/50000 (91%)] Loss: 1.042509, Accuracy: 66.02
Train Epoch: 2 [42240/50000 (94%)] Loss: 1.152499, Accuracy: 59.77
Train Epoch: 2 [43520/50000 (97%)] Loss: 1.146298, Accuracy: 56.25
Train Epoch: 2 [35000/50000 (99%)] Loss: 1.045385, Accuracy: 59.00
Validation set: Average loss: 1.1258, Accuracy: 2961/5000 (59.00%)
the time of this epoch:[36.50461554527283 s]
Train Epoch: 3 [0/50000 (0%)] Loss: 1.012128, Accuracy: 62.50
Train Epoch: 3 [1280/50000 (3%)] Loss: 1.137731, Accuracy: 58.20
Train Epoch: 3 [2560/50000 (6%)] Loss: 0.951903, Accuracy: 64.84
Train Epoch: 3 [3840/50000 (9%)] Loss: 1.073613, Accuracy: 63.28
Train Epoch: 3 [5120/50000 (11%)] Loss: 0.896744, Accuracy: 69.92
Train Epoch: 3 [6400/50000 (14%)] Loss: 0.957336, Accuracy: 64.84
Train Epoch: 3 [7680/50000 (17%)] Loss: 1.031833, Accuracy: 64.84
Train Epoch: 3 [8960/50000 (20%)] Loss: 0.934945, Accuracy: 66.80
Train Epoch: 3 [10240/50000 (23%)] Loss: 1.072113, Accuracy: 62.89
Train Epoch: 3 [11520/50000 (26%)] Loss: 1.147725, Accuracy: 59.38
Train Epoch: 3 [12800/50000 (28%)] Loss: 0.909358, Accuracy: 71.48
###Markdown
Step 3: Test
###Code
test(epoch)
###Output
Test set: Average loss: 0.6902, Accuracy: 8877/10000 (88.77%)
###Markdown
First run: the scale lies in [0,1] ![](http://op4a94iq8.bkt.clouddn.com/18-7-14/70206949.jpg)
###Code
# Inspect information from the training process
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'ro')
plt.figure(2)
plt.plot(xs, losses, 'ro')
plt.show()
def main():
in_file="D://INFO.txt"
# Show accuracy and loss for the training phase
parse(in_file,"Train") # "Validation"
# Show accuracy and loss for the validation phase
#parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
# Inspect information from the training process
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'r-')
plt.figure(2)
plt.plot(xs, losses, 'r-')
plt.show()
def main():
in_file="D://INFO.txt"
# Plot accuracy and loss for the training phase
parse(in_file,"Train") # "Validation"
# Plot accuracy and loss for the validation phase
parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
###Output
_____no_output_____ |
docs/notebooks/Django_shot_requests_external_user.ipynb | ###Markdown
Get config
###Code
# Imports assumed by this notebook: `requests` and `json` are used below;
# `config` is taken to be python-decouple's config() for reading credentials.
import json
import requests
from decouple import config

username = config('USERNAME_TEST')
password = config('PASSWORD_TEST')
server_domain = "http://coquma-sim.herokuapp.com/api/"
requested_backend = "fermions"
url=server_domain + requested_backend +"/get_config/"
r = requests.get(url,params={'username': username,'password':password})
print(r.text)
#print(r.content)
###Output
{"conditional": false, "coupling_map": "linear", "dynamic_reprate_enabled": false, "local": false, "memory": true, "open_pulse": false, "display_name": "fermions", "description": "simulator of a fermionic tweezer hardware. The even wires denote the occupations of the spin-up fermions and the odd wires denote the spin-down fermions", "backend_version": "0.0.1", "cold_atom_type": "fermion", "simulator": true, "num_species": 1, "max_shots": 1000000, "max_experiments": 1000, "n_qubits": 1, "supported_instructions": ["load", "measure", "barrier", "fhop", "fint", "fphase"], "wire_order": "interleaved", "backend_name": "synqs_fermions_simulator", "gates": [{"name": "fhop", "qasm_def": "{}", "parameters": ["j_i"], "description": "hopping of atoms to neighboring tweezers", "coupling_map": [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7]]}, {"name": "fint", "qasm_def": "{}", "parameters": ["u"], "description": "on-site interaction of atoms of opposite spin state", "coupling_map": [[0, 1, 2, 3, 4, 5, 6, 7]]}, {"name": "fphase", "qasm_def": "{}", "parameters": ["mu_i"], "description": "Applying a local phase to tweezers through an external potential", "coupling_map": [[0, 1], [2, 3], [4, 5], [6, 7], [0, 1, 2, 3, 4, 5, 6, 7]]}], "basis_gates": ["fhop", "fint", "fphase"], "url": "https://coquma-sim.herokuapp.com/api/fermions/"}
###Markdown
Submit jobs
###Code
job_payload = {
'experiment_0': {
'instructions': [
('load', [0], []),
('load', [1], []),
('load', [2], []),
('fhop', [0, 1, 2, 3], [1.0]),
('fint', [0, 1, 2, 3, 4, 5, 6, 7], [2.0]),
('fphase', [0, 2], [2.0]),
('measure', [0], []),
('measure', [1], []),
('measure', [2], []),
('measure', [3], []),
('measure', [4], []),
('measure', [5], []),
('measure', [6], []),
('measure', [7], [])
],
'num_wires': 1,
'shots': 10**2,
'wire_order':'interleaved',
},
'experiment_1': {
'instructions': [
('load', [0], []),
('load', [1], []),
('load', [2], []),
('fhop', [0, 1, 2, 3], [1.0]),
('fint', [0, 1, 2, 3, 4, 5, 6, 7], [2.0]),
('fphase', [0, 2], [2.0]),
('measure', [0], []),
('measure', [1], []),
('measure', [2], []),
('measure', [3], []),
('measure', [4], []),
('measure', [5], []),
('measure', [6], []),
('measure', [7], [])
],
'num_wires': 1,
'shots': 600,
'wire_order':'interleaved',
},
}
# job_payload = {
# 'experiment_0': {'instructions': [('load', [0], [100]), ('load', [1], [20]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# 'experiment_1': {'instructions': [('rLx', [0], [0.1]), ('rLx', [3], [0.3]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# 'experiment_2': {'instructions': [('rLz', [0], [0.15]), ('rLz', [3], [0.2]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# 'experiment_3': {'instructions': [('rLz2', [0], [3.141592653589793]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# 'experiment_4': {'instructions': [('load', [0], [10]), ('LzLz', [0, 1], [0.1]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# 'experiment_5': {'instructions': [('load', [0], [10]), ('LxLy', [0, 1], [0.1]), ('measure', [0], [])], 'num_wires': 4, 'shots': 5},
# }
# job_payload = {
# 'experiment_0': {
# 'instructions': [
# ('load', [0], [50]),
# ('rLx', [0], [1.5707963267948966]),
# ('rLx', [0], [1.5707963267948966]),
# ('rLz', [0], [3.141592653589793]),
# ('measure', [0], [])
# ],
# 'num_wires': 1,
# 'shots': 500},
# 'experiment_1': {
# 'instructions': [
# ('load', [0], [50]),
# ('rLx', [0], [1.5707963267948966]),
# ('rLx', [0], [1.5707963267948966]),
# ('rLz', [0], [3.141592653589793]),
# ('measure', [0], [])
# ],
# 'num_wires': 1,
# 'shots': 500},
# }
url=server_domain + requested_backend +"/post_job/"
job_response = requests.post(url, data={'json':json.dumps(job_payload),'username': username,'password':password})
print(job_response.text)
job_id = (job_response.json())['job_id']
###Output
_____no_output_____
###Markdown
Get job status
###Code
status_payload = {'job_id': job_id}
url=server_domain + requested_backend +"/get_job_status/"
status_response = requests.get(url, params={'json':json.dumps(status_payload),'username': username,'password':password})
print(status_response.text)
###Output
{"job_id": "20220114_175837-fermions-synqs_test-de242", "status": "DONE", "detail": "Got your json.; Passed json sanity check; Compilation done. Shots sent to solver.", "error_message": "None"}
###Markdown
Get job results
###Code
result_payload = {'job_id': job_id}
url=server_domain + requested_backend +"/get_job_result/"
result_response = requests.get(url, params={'json':json.dumps(result_payload),'username': username,'password':password})
#print(result_response.text)
###Output
_____no_output_____
###Markdown
Get user jobs for this backend
###Code
url=server_domain + requested_backend +"/get_user_jobs/"
queue_response = requests.get(url, params={'username': username,'password':password})
#print(queue_response.text)
###Output
_____no_output_____
###Markdown
Get next job in queue
###Code
# url=server_domain + requested_backend +"/get_next_job_in_queue/"
# queue_response = requests.get(url, params={'username': username,'password':password})
# print(queue_response.text)
###Output
_____no_output_____
###Markdown
Change password
###Code
# url="http://localhost:9000/shots/change_password/"
# #job_response = requests.post(url, data={'username': username,'password':password,'new_password':'blah'})
# print(job_response.text)
###Output
_____no_output_____ |
notebooks/focal_loss.ipynb | ###Markdown
Binary focal loss
###Code
# Imports needed by the loss classes below
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryFocalWithLogitsLoss(nn.Module):
"""Computes the focal loss with logits for binary data.
The Focal Loss is designed to address the one-stage object detection scenario in
which there is an extreme imbalance between foreground and background classes during
training (e.g., 1:1000). Focal loss is defined as:
FL = alpha(1 - p)^gamma * CE(p, y)
where p are the probabilities, after applying the Softmax layer to the logits,
alpha is a balancing parameter, gamma is the focusing parameter, and CE(p, y) is the
cross entropy loss. When gamma=0 and alpha=1 the focal loss equals cross entropy.
See: https://arxiv.org/abs/1708.02002
Arguments:
gamma (float, optional): focusing parameter. Default: 2.
alpha (float, optional): balancing parameter. Default: 0.25.
reduction (string, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied,
'mean': the sum of the output will be divided by the number of
elements in the output, 'sum': the output will be summed. Default: 'mean'
eps (float, optional): small value to avoid division by zero. Default: 1e-6.
"""
def __init__(self, gamma=2, alpha=0.25, reduction="mean"):
super().__init__()
self.gamma = gamma
self.alpha = alpha
if reduction.lower() == "none":
self.reduction_op = None
elif reduction.lower() == "mean":
self.reduction_op = torch.mean
elif reduction.lower() == "sum":
self.reduction_op = torch.sum
else:
raise ValueError(
"expected one of ('none', 'mean', 'sum'), got {}".format(reduction)
)
def forward(self, input, target):
if input.size() != target.size():
raise ValueError(
"size mismatch, {} != {}".format(input.size(), target.size())
)
elif target.unique(sorted=True).tolist() not in [[0, 1], [0], [1]]:
raise ValueError("target values are not binary")
input = input.view(-1)
target = target.view(-1)
# Following the paper: probabilities = probabilities if y=1; otherwise, probabilities = 1-probabilities
probabilities = torch.sigmoid(input)
probabilities = torch.where(target == 1, probabilities, 1 - probabilities)
# Compute the loss
focal = self.alpha * (1 - probabilities).pow(self.gamma)
bce = nn.functional.binary_cross_entropy_with_logits(input, target, reduction="none")
loss = focal * bce
if self.reduction_op is not None:
return self.reduction_op(loss)
else:
return loss
def forward_heng(self, logits, labels):
"""https://www.kaggle.com/c/carvana-image-masking-challenge/discussion/39951"""
probs = torch.sigmoid(logits)
w_pos = torch.pow((1-probs), self.gamma)
w_neg = torch.pow((probs), self.gamma)
weights = (labels==1).float()*w_pos + (labels==0).float()*w_neg
inputs = logits.view (-1)
targets = labels.view(-1)
weights = weights.view (-1)
loss = weights * inputs.clamp(min=0) - weights * inputs * targets + weights * torch.log(1 + torch.exp(-inputs.abs()))
loss = loss.sum() / weights.sum()
return loss
def forward_adrien(self, input, target):
"""https://becominghuman.ai/investigating-focal-and-dice-loss-for-the-kaggle-2018-data-science-bowl-65fb9af4f36c"""
# Inspired by the implementation of binary_cross_entropy_with_logits
if not (target.size() == input.size()):
raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
max_val = (-input).clamp(min=0)
loss = input - input * target + max_val + ((-max_val).exp() + (-input - max_val).exp()).log()
# This formula gives us the log sigmoid of 1-p if y is 0 and of p if y is 1
invprobs = nn.functional.logsigmoid(-input * (target * 2 - 1))
loss = (invprobs * self.gamma).exp() * loss
return loss.mean()
###Output
_____no_output_____
###Markdown
The following losses should match very closely:
###Code
loss = BinaryFocalWithLogitsLoss(alpha=1)
target = torch.Tensor([1])
out = torch.Tensor([2.2])
print("Target:\n", target)
print("Model out:\n", out)
print("BF Loss x100:\n", loss.forward(out, target) * 100)
print("BF Loss Heng x100:\n", loss.forward_heng(out, target) * 100)
print("Loss Adrien x100:\n", loss.forward_adrien(out, target) * 100)
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
target = torch.Tensor([1])
out = torch.Tensor([3.43])
print("Target:\n", target)
print("Model out:\n", out)
print("BF Loss x1000:\n", loss.forward(out, target) * 1000)
print("BF Loss Heng x1000:\n", loss.forward_heng(out, target) * 1000)
print("Loss Adrien x1000:\n", loss.forward_adrien(out, target) * 1000)
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
###Output
Target:
tensor([1.])
Model out:
tensor([3.4300])
BF Loss x1000:
tensor(0.0314)
BF Loss Heng x1000:
tensor(31.8735)
Loss Adrien x1000:
tensor(0.0314)
BCE Loss:
tensor(0.0319)
###Markdown
Some more tests
###Code
target = torch.Tensor([1, 0])
out = torch.Tensor([100, -50])
print("Target:\n", target)
print("Model out:\n", out)
print("BF Loss:\n", loss.forward(out, target))
print("BF Loss Heng:\n", loss.forward_heng(out, target))
print("Loss Adrien:\n", loss.forward_adrien(out, target))
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
target = torch.Tensor([1, 0, 0, 0, 1])
out = torch.Tensor([-5, -2.5, -6, -10, -2])
print("Target:\n", target)
print("Model out:\n", out)
print("BF Loss:\n", loss.forward(out, target))
print("BF Loss Heng:\n", loss.forward_heng(out, target))
print("Loss Adrien:\n", loss.forward_adrien(out, target))
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
target = torch.randint(2, (2, 5, 5))
out = torch.randint(2, (2, 5, 5)) * 3.44
print("Target:\n", target.size())
print("Model out:\n", out.size())
print("BF Loss:\n", loss.forward(out, target))
print("BF Loss Heng:\n", loss.forward_heng(out, target))
print("Loss Adrien:\n", loss.forward_adrien(out, target))
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
target = torch.randint(2, (2, 5, 5)).float()
out = (target * 100) - 50
print("Target:\n", target.size())
print("Model out:\n", out.size())
print("BF Loss:\n", loss.forward(out, target))
print("BF Loss Heng:\n", loss.forward_heng(out, target))
print("Loss Adrien:\n", loss.forward_adrien(out, target))
print("BCE Loss:\n", nn.functional.binary_cross_entropy_with_logits(out, target))
target = torch.randint(2, (5, 2048, 2048))
out = target * 100
%timeit loss.forward(out, target)
###Output
1.4 s ± 254 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Multi-class focal loss
###Code
class FocalWithLogitsLoss(nn.Module):
"""Computes the focal loss with logits.
The Focal Loss is designed to address the one-stage object detection scenario in
which there is an extreme imbalance between foreground and background classes during
training (e.g., 1:1000). Focal loss is defined as:
FL = alpha(1 - p)^gamma * CE(p, y)
where p are the probabilities, after applying the Softmax layer to the logits,
alpha is a balancing parameter, gamma is the focusing parameter, and CE(p, y) is the
cross entropy loss. When gamma=0 and alpha=1 the focal loss equals cross entropy.
See: https://arxiv.org/abs/1708.02002
Arguments:
gamma (float, optional): focusing parameter. Default: 2.
alpha (float, optional): balancing parameter. Default: 0.25.
reduction (string, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied,
'mean': the sum of the output will be divided by the number of
elements in the output, 'sum': the output will be summed. Default: 'mean'
eps (float, optional): small value to avoid division by zero. Default: 1e-6.
"""
def __init__(self, gamma=2, alpha=0.25, reduction="mean"):
super().__init__()
self.gamma = gamma
self.alpha = alpha
if reduction.lower() == "none":
self.reduction_op = None
elif reduction.lower() == "mean":
self.reduction_op = torch.mean
elif reduction.lower() == "sum":
self.reduction_op = torch.sum
else:
raise ValueError(
"expected one of ('none', 'mean', 'sum'), got {}".format(reduction)
)
def forward(self, input, target):
if input.dim() == 4:
input = input.permute(0, 2, 3, 1)
input = input.contiguous().view(-1, input.size(-1))
elif input.dim() != 2:
raise ValueError(
"expected input of size 4 or 2, got {}".format(input.dim())
)
if target.dim() == 3:
target = target.contiguous().view(-1)
elif target.dim() != 1:
raise ValueError(
"expected target of size 3 or 1, got {}".format(target.dim())
)
if target.dim() != input.dim() - 1:
raise ValueError(
"expected target dimension {} for input dimension {}, got {}".format(
input.dim() - 1, input.dim(), target.dim()
)
)
m = input.size(0)
probabilities = nn.functional.softmax(input[range(m), target], dim=0)
focal = self.alpha * (1 - probabilities).pow(self.gamma)
ce = nn.functional.cross_entropy(
input, target, reduction="none"
)
loss = focal * ce
if self.reduction_op is not None:
return self.reduction_op(loss)
else:
return loss
def forward_onehot(self, input, target):
if input.dim() != 2 and input.dim() != 4:
raise ValueError("expected input of size 4 or 2, got {}".format(input.dim()))
if target.dim() != 1 and target.dim() != 3:
raise ValueError("expected target of size 3 or 1, got {}".format(target.dim()))
target_onehot = to_onehot(target, input.size(1))
m = input.size(0)
probabilities = torch.sum(target_onehot * F.softmax(input, dim=0), dim=1)
focal = self.alpha * (1 - probabilities).pow(self.gamma)
ce = F.cross_entropy(input, target, reduction="none")
loss = focal * ce
if self.reduction_op is not None:
return self.reduction_op(loss)
else:
return loss
def to_onehot(tensor, num_classes):
tensor = tensor.unsqueeze(1)
onehot = torch.zeros(tensor.size(0), num_classes, *tensor.size()[2:])
onehot.scatter_(1, tensor, 1)
return onehot
loss = FocalWithLogitsLoss()
target = torch.Tensor([1]).long()
out = torch.Tensor([[-100, 100, -100, -50]]).float()
print("Target:\n", target)
print("Model out:\n", out)
print("Loss:\n", loss.forward(out, target))
print("Onehot loss:\n", loss.forward_onehot(out, target))
target = torch.Tensor([1, 0, 0]).long()
out = torch.Tensor([[-100, 100, -100, -50], [-100, -100, -100, 50], [100, -100, -100, -50]]).float()
print("Target:\n", target)
print("Model out:\n", out)
print("Loss:\n", loss.forward(out, target))
print("Onehot loss:\n", loss.forward_onehot(out, target))
target = torch.randint(3, (2, 5, 5)).long()
out = torch.randint(3, (2, 5, 5)).long()
out = to_onehot(out, 3).float() * 100
print("Target:\n", target.size())
print("Model out:\n", out.size())
print("Loss:\n", loss.forward(out, target))
print("Onehot loss:\n", loss.forward_onehot(out, target))
target = torch.randint(3, (2, 5, 5)).long()
out = to_onehot(target, 3).float() * 100
print("Target:\n", target.size())
print("Model out:\n", out.size())
print("Loss:\n", loss.forward(out, target))
print("Onehot loss:\n", loss.forward_onehot(out, target))
target = torch.randint(3, (5, 2048, 2048)).long()
out = to_onehot(target, 3).float() * 100
%timeit loss.forward(out, target)
%timeit loss.forward_onehot(out, target)
###Output
5.44 s ± 58.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
|
notebooks/interpretability/explain-on-local/simple-feature-transformations-explain-local.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/explain-model/tabular-data/simple-feature-transformations-explain-local.png) Explain binary classification model predictions with raw feature transformations_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to explain and visualize a binary classification model that uses one to one and one to many feature transformations.**_ Table of Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Run model explainer locally at training time](Explain) 1. Apply feature transformations 1. Train a binary classification model 1. Explain the model on raw features 1. Generate global explanations 1. Generate local explanations1. [Visualize results](Visualize)1. [Next steps](Next%20steps) IntroductionThis notebook illustrates creating explanations for a binary classification model, IBM employee attrition classification, that uses one to one and one to many feature transformations from raw data to engineered features. The one to many feature transformations include one hot encoding on categorical features. The one to one feature transformations apply standard scaling on numeric features. Our tabular data explainer is then used to get raw feature importances.We will showcase raw feature transformations with three tabular data explainers: TabularExplainer (SHAP), MimicExplainer (global surrogate), and PFIExplainer.| ![Interpretability Toolkit Architecture](./img/interpretability-architecture.png) ||:--:|| *Interpretability Toolkit Architecture* |Problem: IBM employee attrition classification with scikit-learn (run model explainer locally)1. Transform raw features to engineered features2. Train a SVC classification model using Scikit-learn3. Run 'explain_model' globally and locally with full dataset in local mode, which doesn't contact any Azure services.4. Visualize the global and local explanations with the visualization dashboard.---Setup: If you are using Jupyter notebooks, the extensions should be installed automatically with the package.If you are using Jupyter Labs run the following command:```(myenv) $ jupyter labextension install @jupyter-widgets/jupyterlab-manager``` Explain Run model explainer locally at training time
###Code
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.svm import SVC
import pandas as pd
import numpy as np
# Explainers:
# 1. SHAP Tabular Explainer
from interpret.ext.blackbox import TabularExplainer
# OR
# 2. Mimic Explainer
from interpret.ext.blackbox import MimicExplainer
# You can use one of the following four interpretable models as a global surrogate to the black box model
from interpret.ext.glassbox import LGBMExplainableModel
from interpret.ext.glassbox import LinearExplainableModel
from interpret.ext.glassbox import SGDExplainableModel
from interpret.ext.glassbox import DecisionTreeExplainableModel
# OR
# 3. PFI Explainer
from interpret.ext.blackbox import PFIExplainer
###Output
_____no_output_____
###Markdown
Load the IBM employee attrition data
###Code
# get the IBM employee attrition dataset
outdirname = 'dataset.6.21.19'
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
import zipfile
zipfilename = outdirname + '.zip'
urlretrieve('https://publictestdatasets.blob.core.windows.net/data/' + zipfilename, zipfilename)
with zipfile.ZipFile(zipfilename, 'r') as unzip:
unzip.extractall('.')
attritionData = pd.read_csv('./WA_Fn-UseC_-HR-Employee-Attrition.csv')
# Dropping Employee count as all values are 1 and hence attrition is independent of this feature
attritionData = attritionData.drop(['EmployeeCount'], axis=1)
# Dropping Employee Number since it is merely an identifier
attritionData = attritionData.drop(['EmployeeNumber'], axis=1)
attritionData = attritionData.drop(['Over18'], axis=1)
# Since all values are 80
attritionData = attritionData.drop(['StandardHours'], axis=1)
# Converting target variables from string to numerical values
target_map = {'Yes': 1, 'No': 0}
attritionData["Attrition_numerical"] = attritionData["Attrition"].apply(lambda x: target_map[x])
target = attritionData["Attrition_numerical"]
attritionXData = attritionData.drop(['Attrition_numerical', 'Attrition'], axis=1)
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(attritionXData,
target,
test_size = 0.2,
random_state=0,
stratify=target)
# Creating dummy columns for each categorical feature
categorical = []
for col, value in attritionXData.iteritems():
if value.dtype == 'object':
categorical.append(col)
# Store the numerical columns in a list numerical
numerical = attritionXData.columns.difference(categorical)
###Output
_____no_output_____
###Markdown
Transform raw features We can explain raw features by either using a `sklearn.compose.ColumnTransformer` or a list of fitted transformer tuples. The cell below uses `sklearn.compose.ColumnTransformer`. In case you want to run the example with the list of fitted transformer tuples, comment the cell below and uncomment the cell that follows after.
###Code
from sklearn.compose import ColumnTransformer
# We create the preprocessing pipelines for both numeric and categorical data.
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
transformations = ColumnTransformer(
transformers=[
('num', numeric_transformer, numerical),
('cat', categorical_transformer, categorical)])
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', transformations),
('classifier', SVC(C = 1.0, probability=True))])
'''
# Uncomment below if sklearn-pandas is not installed
#!pip install sklearn-pandas
from sklearn_pandas import DataFrameMapper
# Impute, standardize the numeric features and one-hot encode the categorical features.
numeric_transformations = [([f], Pipeline(steps=[('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())])) for f in numerical]
categorical_transformations = [([f], OneHotEncoder(handle_unknown='ignore', sparse=False)) for f in categorical]
transformations = numeric_transformations + categorical_transformations
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', transformations),
('classifier', SVC(C = 1.0, probability=True))])
'''
###Output
_____no_output_____
###Markdown
Train an SVM classification model, which you want to explain
###Code
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
# 1. Using SHAP TabularExplainer
# clf.steps[-1][1] returns the trained classification model
explainer = TabularExplainer(clf.steps[-1][1],
initialization_examples=x_train,
features=attritionXData.columns,
classes=["Not leaving", "leaving"],
transformations=transformations)
# 2. Using MimicExplainer
# augment_data is optional and if true, oversamples the initialization examples to improve surrogate model accuracy to fit original model. Useful for high-dimensional data where the number of rows is less than the number of columns.
# max_num_of_augmentations is optional and defines max number of times we can increase the input data size.
# LGBMExplainableModel can be replaced with LinearExplainableModel, SGDExplainableModel, or DecisionTreeExplainableModel
# explainer = MimicExplainer(clf.steps[-1][1],
# x_train,
# LGBMExplainableModel,
# augment_data=True,
# max_num_of_augmentations=10,
# features=attritionXData.columns,
# classes=["Not leaving", "leaving"],
# transformations=transformations)
# 3. Using PFIExplainer
# Use the parameter "metric" to pass a metric name or function to evaluate the permutation.
# Note that if a metric function is provided a higher value must be better.
# Otherwise, take the negative of the function or set the parameter "is_error_metric" to True.
# Default metrics:
# F1 Score for binary classification, F1 Score with micro average for multiclass classification and
# Mean absolute error for regression
# explainer = PFIExplainer(clf.steps[-1][1],
# features=x_train.columns,
# transformations=transformations,
# classes=["Not leaving", "leaving"])
###Output
_____no_output_____
###Markdown
Generate global explanations. Explain overall model predictions (global explanation)
###Code
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate
global_explanation = explainer.explain_global(x_test)
# Note: if you used the PFIExplainer in the previous step, use the next line of code instead
# global_explanation = explainer.explain_global(x_test, true_labels=y_test)
# Sorted SHAP values
print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))
# Corresponding feature names
print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))
# Feature ranks (based on original order of features)
print('global importance rank: {}'.format(global_explanation.global_importance_rank))
# Note: PFIExplainer does not support per class explanations
# Per class feature names
print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))
# Per class feature importance values
print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))
# Print out a dictionary that holds the sorted feature importance names and values
print('global importance rank: {}'.format(global_explanation.get_feature_importance_dict()))
###Output
_____no_output_____
###Markdown
Explain overall model predictions as a collection of local (instance-level) explanations
###Code
# feature shap values for all features and all data points in the training data
print('local importance values: {}'.format(global_explanation.local_importance_values))
###Output
_____no_output_____
###Markdown
Generate local explanations. Explain local data points (individual instances)
###Code
# Note: PFIExplainer does not support local explanations
# You can pass a specific data point or a group of data points to the explain_local function
# E.g., Explain the first data point in the test set
instance_num = 1
local_explanation = explainer.explain_local(x_test[:instance_num])
# Get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = clf.predict(x_test)[instance_num]
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
print('local importance values: {}'.format(sorted_local_importance_values))
print('local importance names: {}'.format(sorted_local_importance_names))
###Output
_____no_output_____
###Markdown
Visualize. Load the visualization dashboard
###Code
from azureml.contrib.interpret.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, model, x_test)
###Output
_____no_output_____ |
resources/adversarial_regression/.ipynb_checkpoints/Nash Adversarial Regression Game-checkpoint.ipynb | ###Markdown
Benchmark for regression
###Code
data = pd.read_csv("data/winequality-white.csv", sep = ";")
data.head()
X = data.loc[:, data.columns != "quality"]
y = data.quality
###Output
_____no_output_____
###Markdown
Extract first few PCA components
###Code
pca = PCA(n_components=11, svd_solver='full')
pca.fit(X)
print(pca.explained_variance_ratio_)
x = pca.fit_transform(X)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
x_train = np.asarray(x_train,dtype=np.float32)
y_train = np.asarray(y_train,dtype=np.float32).reshape(-1,1)
x_test = np.asarray(x_test,dtype=np.float32) # a bit of a cheat
y_test = np.asarray(y_test,dtype=np.float32).reshape(-1,1)
### to torch
x_train = Variable( torch.from_numpy(x_train) )
y_train = Variable( torch.from_numpy(y_train) )
x_test = torch.from_numpy(x_test)
y_test = torch.from_numpy(y_test)
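# `stand` is not defined anywhere in this notebook; a minimal column-wise
# standardization helper is assumed here, matching how it is called below.
def stand(x, mu, sigma):
    return (x - mu) / sigma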
x_train_renom = stand(x_train, x_train.mean(dim=0), x_train.std(dim=0))
x_test_renom = stand(x_test, x_train.mean(dim=0), x_train.std(dim=0))
###Output
_____no_output_____
###Markdown
Using torch. Convention: last weight will be the bias.
###Code
w = torch.randn(1, x_train.shape[1] + 1, requires_grad=True)
lmb = 0.0
def model(x, w):
weights = w[0,:-1].view(1,-1)
bias = w[0,-1]
return( x @ weights.t() + bias )
def mse(t1, t2, w):
diff = t1 - t2
return( torch.sum(diff * diff) / diff.numel() + lmb*w @ w.t() )
lr = 0.01
epochs = 1000
for epoch in range(epochs):
epoch += 1
preds = model(x_train_renom, w)
loss = mse(preds, y_train, w)
loss.backward()
with torch.no_grad():
w -= w.grad * lr
w.grad.zero_()
if epoch%1000== 0:
print('epoch {}, loss {}'.format(epoch,loss.data[0]))
print(w)
def rmse(y, pred):
return torch.sqrt( torch.mean( (pred - y )**2 ) )
preds = model(x_test_renom, w)
rmse(y_test, preds)
###Output
_____no_output_____
###Markdown
Adversarial attack. Non-Bayesian case Let $X$ denote the clean dataset, and $X^* = T(X, \beta)$ the attacked dataset, when the classifier choose parameters $\beta$. We try to solve the following Defend-Attack game$$\beta^* = \arg\min_{\beta} \widehat{\theta}_C [\beta, T(X, \beta)] = \arg\min_{\beta} \sum_{i=1}^n \left( T(x, \beta)^{[i]}\beta^{\top} - y_i \right)^2 + \lambda \beta \beta^{\top}$$subject to$$X^* = T(X, \beta) = \arg\min_{X'} \widehat{\theta}_A [\beta, X'] = \arg\min_{X'} \sum_{i=1}^n c_{i}\left( X'^{[i]}\beta^{\top} - z_i \right)^2 + ||X-X'||^2_{F}$$Where $y$ are the true labels, $z$ are the targets and $c$ are instance-specific factors, which are common knowledge here. We can solve exactly the attacker's problem, yielding$$X^* = T(X, \beta) = X - \left(\text{diag}(c_d)^{-1} + \beta \beta^{\top} I_n \right)^{-1} (X\beta - z)\beta^\top$$We could then compute the gradient for the classifier problem using$$\nabla \widehat{\theta}_C [\beta, T(X, \beta)] = \nabla_{\beta} \widehat{\theta}_C [\beta, T(X, \beta)] + \nabla_T \widehat{\theta}_C [\beta, T(X, \beta)] \frac{\partial T(X,\beta)}{\partial \beta}$$and use gradient descent to find $\beta^*$. Defense - Forward mode Attack - Analytic form
###Code
# Exact solution to the attacker problem
def attack(w, instance, c_d, z):
weights = w[0,:-1].view(1,-1)
bias = w[0,-1]
##
p1 = ( 1/c_d + weights @ weights.t() )**(-1)
p1 = torch.diag( p1.squeeze(1) )
p2 = ( instance @ weights.t() - (z - bias) ) @ weights
out = instance - p1 @ p2
return(out)
value = 0.5 ## Same c_i for every instance
c_d = torch.ones([len(y_test), 1])*value
z = torch.zeros([len(y_test),1])
out = attack(w, x_test_renom, c_d, z)
pred_at = model(out, w)
pred_clean = model(x_test_renom, w)
print("Clean test RMSE: ", torch.sqrt( torch.mean( (pred_clean - y_test )**2 ) ) )
print("Attacked est RMSE: ", torch.sqrt( torch.mean( (pred_at- y_test )**2 ) ) )
###Output
Clean test RMSE: tensor(0.7360)
Attacked est RMSE: tensor(0.9263)
###Markdown
Attack - Using torch
###Code
lr = 10e-2
epochs = 100
value = 0.5
#
c_d = torch.ones([len(y_test), 1])*value
z = torch.zeros([len(y_test),1])
#
def attacker_cost_flatten(w, x, x_old, c_d, z):
weights = w[0,:-1].view(1,-1)
bias = w[0,-1]
x = x.view(x_old.shape[0],-1)
##
diff = x_old - x
return torch.sum( c_d*(x @ weights.t() + bias)**2 ) + torch.sum(diff**2)
instance = x_test_renom
out = attack(w, instance, c_d, z)
attacked_instance = torch.randn(x_test_renom.shape[0]*x_test.shape[1], requires_grad=True)
##
for epoch in range(epochs):
epoch += 1
loss = attacker_cost_flatten(w, attacked_instance, instance, c_d, z)
loss.backward()
with torch.no_grad():
attacked_instance -= attacked_instance.grad * lr
attacked_instance.grad.zero_()
if epoch%10 == 0:
print('epoch {}, loss {}'.format(epoch,loss.data[0]))
print(attacker_cost_flatten(w, attacked_instance, instance, c_d, z))
print(attacker_cost_flatten(w, out.view(-1,1), instance, c_d, z))
def learner_cost_flatten(w, x, y, lmb):
x = x.view(-1,w.shape[1]-1)
weights = w[0,:-1].view(1,-1)
bias = w[0,-1]
return torch.sum( (x @ weights.t() + bias - y)**2 ) + lmb * weights @ weights.t()
def attacker_cost_flatten(w, x, x_old, c_d, z):
weights = w[0,:-1].view(1,-1)
bias = w[0,-1]
x = x.view(x_old.shape[0],-1)
##
diff = x_old - x
return torch.sum( c_d*(x @ weights.t() + bias)**2 ) + torch.sum(diff**2)
###Output
_____no_output_____
###Markdown
Defense Forward Mode
###Code
##
def compute_full_second_derivative(vec_func, var):
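    # Builds the matrix of derivatives of vec_func with respect to var,
    # one autograd.grad call per component: column i holds d vec_func[i] / d var.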
tmp = torch.zeros( int(np.max(var.shape)), vec_func.shape[0])
for i, loss in enumerate(vec_func):
tmp[:,i] = torch.autograd.grad(loss, var, retain_graph=True)[0]
return tmp
##
def do_forward_multidim(w, x, x_clean, c_d, z, y_train, lmb, T=100):
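    # Forward-mode hypergradient: unroll T inner gradient steps on the attacker
    # objective gm while carrying Z (an approximation of dx/dw) along, so the
    # outer update can apply the total derivative grad_w f + grad_x f @ Z.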
lr = 10e-6 # Outer learning rate
ilr = 0.01 # Inner learning rate
##
gm = lambda w, x: attacker_cost_flatten(w, x, x_clean, c_d, z)
fm = lambda w, x: learner_cost_flatten(w, x, y_train, lmb)
##
Z = torch.zeros(x.shape[0], w.shape[1])
for i in range(T):
# We nee to compute the total derivative of f wrt x
#y = 0.0
for j in range(T):
grad_x = torch.autograd.grad(gm(w,x), x, create_graph=True)[0]
new_x = x - ilr*grad_x
##
A_tensor = compute_full_second_derivative(new_x, x)
B_tensor = compute_full_second_derivative(new_x, w)
##
Z = A_tensor @ Z + B_tensor.t()
#Z = Z @ A_tensor + B_tensor
x = Variable(new_x, requires_grad=True)
grad_w = torch.autograd.grad(fm(w,x), w, retain_graph=True)[0]
grad_x = torch.autograd.grad(fm(w,x), x)[0]
##
# print(grad_x.shape, Z.shape, grad_w.shape)
w = w - lr*(grad_w + grad_x @ Z)
print(fm(w,x))
return(w)
value = 0.5
c_d = torch.ones([len(y_train), 1])*value
z = torch.zeros([len(y_train),1])
w_clean = torch.randn(1, x_train.shape[1] + 1, requires_grad=True)
instance = x_train_renom
attacked_instance = torch.randn(x_train_renom.shape[0]*x_train_renom.shape[1], requires_grad=True)
w_clean_fw = do_forward_multidim(w_clean, attacked_instance, instance, c_d, z, y_train, lmb=0.0, T=10)
###Output
_____no_output_____
###Markdown
Defense Backward Mode
###Code
def do_backward_multidim(w, x, x_clean, c_d, z, y_train, lmb, T=100):
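    # Reverse-mode hypergradient: run T inner attacker steps storing the
    # trajectory xt, then sweep backwards through the unrolled updates to
    # accumulate the gradient of the outer loss fm with respect to w.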
lr = 10e-6 # Outer learning rate
ilr = 0.01 # Inner learning rate
##
gm = lambda w, x: attacker_cost_flatten(w, x, x_clean, c_d, z)
fm = lambda w, x: learner_cost_flatten(w, x, y_train, lmb)
##
xt = torch.zeros(int(T), x.shape[0])
for i in range(T):
# We nee to compute the total derivative of f wrt x
##
for j in range(T):
grad_x = torch.autograd.grad(gm(w,x), x, create_graph=True)[0]
new_x = x - ilr*grad_x
x = Variable(new_x, requires_grad=True)
xt[j] = x
## CHECK WITH ANALYTICAL SOLUTION
###
alpha = -torch.autograd.grad(fm(w,x), x, retain_graph=True)[0]
gr = torch.zeros_like(w)
###
for j in range(T-1,-1,-1):
x_tmp = Variable(xt[j], requires_grad=True)
grad_x, = torch.autograd.grad( gm(w,x_tmp), x_tmp, create_graph=True )
loss = x_tmp - ilr*grad_x
loss = loss@alpha
aux1 = torch.autograd.grad(loss, w, retain_graph=True)[0]
aux2 = torch.autograd.grad(loss, x_tmp)[0]
gr -= aux1
alpha = aux2
grad_w = torch.autograd.grad(fm(w,x), w)[0]
##
w = w - lr*(grad_w + gr)
if i%10 == 0:
print( 'epoch {}, loss {}'.format(i,fm(w,x)) )
return w
value = 0.5
c_d = torch.ones([len(y_train), 1])*value
z = torch.zeros([len(y_train),1])
w_clean = torch.randn(1, x_train.shape[1] + 1, requires_grad=True)
instance = x_train_renom
attacked_instance = torch.randn(x_train_renom.shape[0]*x_train_renom.shape[1], requires_grad=True)
w_clean_bw = do_backward_multidim(w_clean, attacked_instance, instance, c_d, z, y_train, lmb=0.0, T=450)
w_clean_bw
###Output
_____no_output_____
###Markdown
Test Nash Solution
###Code
value = 0.5
c_d = torch.ones([len(y_test), 1])*value
z = torch.zeros([len(y_test),1])
##
out = attack(w, x_test_renom, c_d, z)
preds = model(out, w_clean)
rmse(y_test, preds)
preds = model(out, w)
rmse(y_test, preds)
###Output
_____no_output_____
###Markdown
Defense Analytical Solution
###Code
w_a = w[0][:-1]
b_a = w[0][-1]
##
w_nash = torch.randn(1, x_train.shape[1], requires_grad=True)
b_nash = torch.randn(1, requires_grad=True)
def attack_a(w, b, test, c_d, z):
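    # Closed-form attacker best response for weights w and bias b
    # (same analytic formula as `attack` above, with the bias handled explicitly).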
c_d = ( 1/c_d + w @ w.t() )**(-1)
p1 = torch.diag( c_d[0] )
#p1 = torch.inverse( torch.inverse( torch.diag(c_d) ) + w @ w.t() * torch.eye( test.shape[0] ) )
p2 = ( test @ w.t() + b - z)@w
out = test - p1 @ p2
return(out)
def learner_cost_a(w, b, x, y, lmb, c_d, z):
out = attack_a(w, b, x, c_d, z)
#out = stand(out, out.mean(dim=0), out.std(dim=0))
#print(out.std(dim=0))
return torch.sum( (out @ w.t() + b - y)**2 ) + lmb * w @ w.t()
lr = 10e-6
epochs = 400
value = 0.5
c_d = torch.ones(len(y_train))*value
z = torch.zeros([len(y_train),1])
print("Initial Cost", learner_cost(w_nash, b_nash, x_train_renom, y_train, lmb, c_d, z))
for epoch in range(epochs):
epoch += 1
loss = learner_cost_a(w_nash, b_nash, x_train_renom, y_train, lmb, c_d, z)
loss.backward()
with torch.no_grad():
w_nash -= w_nash.grad * lr
b_nash -= b_nash.grad * lr
w_nash.grad.zero_()
b_nash.grad.zero_()
if epoch%100 == 0:
print('epoch {}, loss {}'.format(epoch,loss.data[0]))
print(w_nash)
print(b_nash)
w_clean_bw
###Output
_____no_output_____ |
Platforms/Kaggle/Courses/Intro_to_Deep_Learning/5.Dropout_and_Batch_Normalization/exercise-dropout-and-batch-normalization.ipynb | ###Markdown
**This notebook is an exercise in the [Intro to Deep Learning](https://www.kaggle.com/learn/intro-to-deep-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/dropout-and-batch-normalization).**--- Introduction In this exercise, you'll add dropout to the *Spotify* model from Exercise 4 and see how batch normalization can let you successfully train models on difficult datasets. Run the next cell to get started!
###Code
# Setup plotting
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('animation', html='html5')
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning_intro.ex5 import *
###Output
_____no_output_____
###Markdown
First load the *Spotify* dataset.
###Code
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GroupShuffleSplit
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import callbacks
spotify = pd.read_csv('../input/dl-course-data/spotify.csv')
X = spotify.copy().dropna()
y = X.pop('track_popularity')
artists = X['track_artist']
features_num = ['danceability', 'energy', 'key', 'loudness', 'mode',
'speechiness', 'acousticness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms']
features_cat = ['playlist_genre']
preprocessor = make_column_transformer(
(StandardScaler(), features_num),
(OneHotEncoder(), features_cat),
)
def group_split(X, y, group, train_size=0.75):
splitter = GroupShuffleSplit(train_size=train_size)
train, test = next(splitter.split(X, y, groups=group))
return (X.iloc[train], X.iloc[test], y.iloc[train], y.iloc[test])
X_train, X_valid, y_train, y_valid = group_split(X, y, artists)
X_train = preprocessor.fit_transform(X_train)
X_valid = preprocessor.transform(X_valid)
y_train = y_train / 100
y_valid = y_valid / 100
input_shape = [X_train.shape[1]]
print("Input shape: {}".format(input_shape))
###Output
Input shape: [18]
###Markdown
1) Add Dropout to Spotify Model. Here is the last model from Exercise 4. Add two dropout layers, one after the `Dense` layer with 128 units, and one after the `Dense` layer with 64 units. Set the dropout rate on both to `0.3`.
###Code
# YOUR CODE HERE: Add two 30% dropout layers, one after 128 and one after 64
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.3),
layers.Dense(64, activation='relu'),
layers.Dropout(0.3),
layers.Dense(1)
])
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
q_1.hint()
q_1.solution()
###Output
_____no_output_____
###Markdown
Now run this next cell to train the model and see the effect of adding dropout.
###Code
model.compile(
optimizer='adam',
loss='mae',
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=512,
epochs=50,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
print("Minimum Validation Loss: {:0.4f}".format(history_df['val_loss'].min()))
###Output
Minimum Validation Loss: 0.1959
###Markdown
2) Evaluate Dropout. Recall from Exercise 4 that this model tended to overfit the data around epoch 5. Did adding dropout seem to help prevent overfitting this time?
###Code
# View the solution (Run this cell to receive credit!)
q_2.check()
###Output
_____no_output_____
###Markdown
Now, we'll switch topics to explore how batch normalization can fix problems in training. Load the *Concrete* dataset. We won't do any standardization this time. This will make the effect of batch normalization much more apparent.
###Code
import pandas as pd
concrete = pd.read_csv('../input/dl-course-data/concrete.csv')
df = concrete.copy()
df_train = df.sample(frac=0.7, random_state=0)
df_valid = df.drop(df_train.index)
X_train = df_train.drop('CompressiveStrength', axis=1)
X_valid = df_valid.drop('CompressiveStrength', axis=1)
y_train = df_train['CompressiveStrength']
y_valid = df_valid['CompressiveStrength']
input_shape = [X_train.shape[1]]
###Output
_____no_output_____
###Markdown
Run the following cell to train the network on the unstandardized *Concrete* data.
###Code
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
model.compile(
optimizer='sgd', # SGD is more sensitive to differences of scale
loss='mae',
metrics=['mae'],
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=100,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
###Output
Minimum Validation Loss: nan
###Markdown
Did you end up with a blank graph? Trying to train this network on this dataset will usually fail. Even when it does converge (due to a lucky weight initialization), it tends to converge to a very large number. 3) Add Batch Normalization Layers. Batch normalization can help correct problems like this. Add four `BatchNormalization` layers, one before each of the dense layers. (Remember to move the `input_shape` argument to the new first layer.)
###Code
# YOUR CODE HERE: Add a BatchNormalization layer before each Dense layer
model = keras.Sequential([
layers.BatchNormalization(input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(1),
])
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
q_3.hint()
q_3.solution()
###Output
_____no_output_____
###Markdown
Run the next cell to see if batch normalization will let us train the model.
###Code
model.compile(
optimizer='sgd',
loss='mae',
metrics=['mae'],
)
EPOCHS = 100
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=EPOCHS,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
###Output
Minimum Validation Loss: 4.0009
###Markdown
4) Evaluate Batch Normalization. Did adding batch normalization help?
###Code
# View the solution (Run this cell to receive credit!)
q_4.check()
###Output
_____no_output_____ |
_notebooks/2020-02-21-introducing-fastpages.ipynb | ###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpages#hide-inputoutput-cells)
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With Altair. Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave the cell below unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data Tables. You can display tables in the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & Fastpages In this post, we will cover special features that fastpages provides for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which support many, but not all, of the same features. Options via FrontMatter The first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` # Title > Awesome summary - toc: true - branch: master - badges: true - comments: true - author: Hamel Husain & Jeremy Howard - categories: [fastpages, jupyter] ``` **All of the above settings are enabled in this post, so you can see what they look like!** - the summary field (preceded by `>`) will be displayed under your title, and will also be used by social media as the description of your page. - `toc`: setting this to `true` will automatically generate a table of contents. - `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post. - `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpages#enabling-comments) for more details. - `author`: this will display the authors' names. - `categories`: will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories. _Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpages#front-matter-related-options)._ Code Folding Put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
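###Markdown
If a post depends on packages that the publishing environment might not have installed, a fully hidden setup cell can take care of installation without cluttering the rendered page (the `#hide` flag is covered a little further below). A minimal sketch, assuming `pandas` and `altair` are the only extra packages this post needs:
###Code
#hide
# install the libraries imported above; neither this command nor its output is rendered
!pip install pandas altair
###Output
_____no_output_____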
###Markdown
Put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
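# URLs of the example datasets published by the vega-datasets project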
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpages#hide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
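###Markdown
Depending on which copy of `movies.json` you load, the column names may use spaces (e.g. "Worldwide Gross") rather than underscores. A minimal sketch of a hidden safeguard, run right after loading the data, under the assumption that your copy uses spaced names:
###Code
# hide
# rename spaced column names (e.g. "Worldwide Gross") to the underscore
# style (e.g. "Worldwide_Gross") that the references in this post expect
df.columns = [c.replace(' ', '_') for c in df.columns]
###Output
_____no_output_____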
###Markdown
Interactive Charts With Altair Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave the cell below unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
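###Markdown
Smaller charts stay interactive as well; as a minimal sketch, here is a scatter plot over the `cars` dataset URL defined earlier (assuming the standard vega-datasets field names), with tooltips on hover and pan/zoom enabled by `interactive()`:
###Code
# a compact interactive scatter plot over the cars dataset defined above;
# hovering shows a tooltip, and interactive() enables panning and zooming
alt.Chart(cars).mark_point().encode(
    x='Horsepower:Q',
    y='Miles_per_Gallon:Q',
    color='Origin:N',
    tooltip=['Name:N', 'Horsepower:Q', 'Miles_per_Gallon:Q']
).interactive()
###Output
_____no_output_____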
###Markdown
Data Tables You can display tables in the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
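###Markdown
Anything pandas renders as a DataFrame is displayed the same way; as a minimal sketch, the same table sorted by worldwide gross:
###Code
# top five films by worldwide gross, rendered as a regular table
df.sort_values('Worldwide_Gross', ascending=False)[
    ['Title', 'Worldwide_Gross', 'IMDB_Rating']].head()
###Output
_____no_output_____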
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- cCOMMENTSomments: true- sticky_rank: 1- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#hide
!pip install pandas altair
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
df.columns = [x.replace(' ', '_') for x in df.columns.values]
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=500,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- sticky_rank: 1- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter]- hide: true ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- sticky_rank: 1- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells)
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse_show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse_show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse_show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpageshide-inputoutput-cells)
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With AltairInteractive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave this below cell unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data TablesYou can display tables per the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- sticky_rank: 1- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#hide
!pip install pandas altair
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
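# URLs of sample data files from vega-datasets; only `movies` and `stocks` are used later in this post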
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpages#hide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
df.columns = [x.replace(' ', '_') for x in df.columns.values]
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
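###Markdown
Besides `hide`, the fastpages README also documents flags for hiding only a cell's input or only its output; treat the exact flag name below as an assumption based on that README rather than on this post. A minimal sketch:
###Code
#hide_input
# sketch: with the (assumed) `hide_input` flag, only this cell's output would be
# shown in the rendered post; the code itself would be hidden
print(f"{len(genres)} unique genres found in the movies dataset")
###Output
_____no_output_____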
###Markdown
Interactive Charts With Altair

Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave the cell below unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=500,
height=400
)
###Output
_____no_output_____
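###Markdown
As a further illustration (a sketch, not part of the original post) that Altair interactivity survives in fastpages, the `genres` list computed in the hidden cell above could drive a dropdown filter. This assumes the renamed `df` columns from that cell, and the `Rotten_Tomatoes_Rating` field name is an assumption about the vega movies dataset:
###Code
# sketch: scatter of ratings, filterable by genre via a dropdown bound to `genres`
genre_dropdown = alt.binding_select(options=genres, name='Genre ')
genre_select = alt.selection_single(fields=['Major_Genre'], bind=genre_dropdown)
alt.Chart(df).mark_circle(opacity=0.5).encode(
    x='Rotten_Tomatoes_Rating:Q',   # assumed column name after the underscore rename
    y='IMDB_Rating:Q',
    tooltip='Title:N'
).add_selection(
    genre_select                    # attach the dropdown-bound selection
).transform_filter(
    genre_select                    # keep only rows matching the selected genre
).properties(width=500, height=300)
###Output
_____no_output_____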
###Markdown
Data Tables

You can display tables in the usual way in your blog:
###Code
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
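###Markdown
Not part of the original post, but worth noting: how much of a DataFrame is rendered in a post is governed by standard pandas display options, so they can be tuned in a (possibly hidden) cell. A minimal sketch:
###Code
# sketch: tune how pandas tables render in the post using standard pandas options
import pandas as pd
pd.set_option('display.max_rows', 10)        # show at most 10 rows per table
pd.set_option('display.max_columns', 8)      # and at most 8 columns
pd.set_option('display.float_format', '{:,.1f}'.format)  # thousands separators, 1 decimal place
###Output
_____no_output_____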
###Markdown
Introducing fastpages> An easy to use blogging platform with extra features for Jupyter Notebooks.- toc: true - badges: true- comments: true- description: A page introducing fastpages- author: Jeremy Howard & Hamel Husain- image: images/diagram.png- categories: [fastpages, jupyter] ![](https://github.com/fastai/fastpages/raw/master/images/diagram.png "https://github.com/fastai/fastpages")We are very pleased to announce the immediate availability of [fastpages](https://github.com/fastai/fastpages). `fastpages` is a platform which allows you to create and host a blog for free, with no ads and many useful features, such as:- Create posts containing code, outputs of code (which can be interactive), formatted text, etc directly from [Jupyter Notebooks](https://jupyter.org/); for instance see this great [example post](https://drscotthawley.github.io/devblog3/2019/02/08/My-1st-NN-Part-3-Multi-Layer-and-Backprop.html) from Scott Hawley. Notebook posts support features such as: - Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive. - Hide or show cell input and output. - Collapsable code cells that are either open or closed by default. - Define the Title, Summary and other metadata via a special markdown cells - Ability to add links to [Colab](https://colab.research.google.com/) and GitHub automatically.- Create posts, including formatting and images, directly from Microsoft Word documents.- Create and edit [Markdown](https://guides.github.com/features/mastering-markdown/) posts entirely online using GitHub's built-in markdown editor.- Embed Twitter cards and YouTube videos.- Categorization of blog posts by user-supplied tags for discoverability.- ... and [much more](https://github.com/fastai/fastpages)[fastpages](https://github.com/fastai/fastpages) relies on Github pages for hosting, and [Github Actions](https://github.com/features/actions) to automate the creation of your blog. The setup takes around three minutes, and does not require any technical knowledge or expertise. Due to built-in automation of fastpages, you don't have to fuss with conversion scripts. All you have to do is save your Jupyter notebook, Word document or markdown file into a specified directory and the rest happens automatically. Infact, this blog post is written in a Jupyter notebook, which you can see with the "View on GitHub" link above.[fast.ai](https://www.fast.ai/) have previously released a similar project called [fast_template](https://www.fast.ai/2020/01/16/fast_template/), which is even easier to set up, but does not support automatic creation of posts from Microsoft Word or Jupyter notebooks, including many of the features outlined above.**Because `fastpages` is more flexible and extensible, we recommend using it where possible.** `fast_template` may be a better option for getting folks blogging who have no technical expertise at all, and will only be creating posts using Github's integrated online editor. Setting Up Fastpages[The setup process](https://github.com/fastai/fastpagessetup-instructions) of fastpages is automated with GitHub Actions, too! Upon creating a repo from the fastpages template, a pull request will automatically be opened (after ~ 30 seconds) configuring your blog so it can start working. The automated pull request will greet you with instructions like this:![Imgur](https://i.imgur.com/JhkIip8.png) All you have to do is follow these instructions (in the PR you receive) and your new blogging site will be up and running! 
Jupyter Notebooks & FastpagesIn this post, we will cover special features that fastpages provides has for Jupyter notebooks. You can also write your blog posts with Word documents or markdown in fastpages, which contain many, but not all the same features. Options via FrontMatterThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` Title> Awesome summary- toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```**All of the above settings are enabled in this post, so you can see what they look like!**- the summary field (preceeded by `>`) will be displayed under your title, and will also be used by social media to display as the description of your page.- `toc`: setting this to `true` will automatically generate a table of contents- `badges`: setting this to `true` will display Google Colab and GitHub links on your blog post.- `comments`: setting this to `true` will enable comments. See [these instructions](https://github.com/fastai/fastpagesenabling-comments) for more details.- `author` this will display the authors names. - `categories` will allow your post to be categorized on a "Tags" page, where readers can browse your post by categories._Markdown front matter is formatted similarly to notebooks. The differences between the two can be [viewed on the fastpages README](https://github.com/fastai/fastpagesfront-matter-related-options)._ Code Folding put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
###Code
#collapse-hide
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
###Code
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
###Output
_____no_output_____
###Markdown
If you want to completely hide cells (not just collapse them), [read these instructions](https://github.com/fastai/fastpages#hide-inputoutput-cells).
###Code
# hide
df = pd.read_json(movies) # load movies data
genres = df['Major_Genre'].unique() # get unique field values
genres = list(filter(lambda d: d is not None, genres)) # filter out None values
genres.sort() # sort alphabetically
###Output
_____no_output_____
###Markdown
Interactive Charts With Altair Interactive visualizations made with [Altair](https://altair-viz.github.io/) remain interactive! We leave the cell below unhidden so you can enjoy a preview of syntax highlighting in fastpages, which uses the [Dracula theme](https://draculatheme.com/).
###Code
# select a point for which to provide details-on-demand
label = alt.selection_single(
encodings=['x'], # limit selection to x-axis value
on='mouseover', # select on mouseover events
nearest=True, # select data point nearest the cursor
empty='none' # empty selection includes no data points
)
# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
alt.X('date:T'),
alt.Y('price:Q', scale=alt.Scale(type='log')),
alt.Color('symbol:N')
)
alt.layer(
base, # base line chart
# add a rule mark to serve as a guide line
alt.Chart().mark_rule(color='#aaa').encode(
x='date:T'
).transform_filter(label),
# add circle marks for selected time points, hide unselected points
base.mark_circle().encode(
opacity=alt.condition(label, alt.value(1), alt.value(0))
).add_selection(label),
# add white stroked text to provide a legible background for labels
base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
text='price:Q'
).transform_filter(label),
# add text labels for stock prices
base.mark_text(align='left', dx=5, dy=-5).encode(
text='price:Q'
).transform_filter(label),
data=stocks
).properties(
width=700,
height=400
)
###Output
_____no_output_____
###Markdown
Data Tables You can display tables in the usual way in your blog:
###Code
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)
# display table with pandas
df[['Title', 'Worldwide_Gross',
'Production_Budget', 'IMDB_Rating']].head()
###Output
_____no_output_____
simulate_baseline_performance.ipynb | ###Markdown
Disposition outcome coding In the section below, we transform the SCDB vote and caseDisposition variables into an outcome variable indicating whether the case overall, and each Justice, affirmed or reversed. * vote: [http://scdb.wustl.edu/documentation.php?var=votenorms](http://scdb.wustl.edu/documentation.php?var=votenorms) * caseDisposition: [http://scdb.wustl.edu/documentation.php?var=caseDispositionnorms](http://scdb.wustl.edu/documentation.php?var=caseDispositionnorms)
###Code
"""
Setup the outcome map.
Rows correspond to vote types. Columns correspond to disposition types.
Element values correspond to:
* -1: no precedential issued opinion or uncodable, i.e., DIGs
* 0: affirm, i.e., no change in precedent
* 1: reverse, i.e., change in precedent
"""
# Assumed setup: these imports (and `scdb_data`, the SCDB justice-centered DataFrame used
# throughout this notebook) are presumably defined in an earlier cell that is not shown here.
import numpy
import pandas
import sklearn.metrics
outcome_map = pandas.DataFrame([[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 1, 0, 0, 0, 1, 0, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, 0, 0, 0, -1, 0, -1, -1, -1, -1, -1]])
outcome_map.columns = range(1, 12)
outcome_map.index = range(1, 9)
def get_outcome(vote, disposition):
"""
Return the outcome code.
"""
if pandas.isnull(vote) or pandas.isnull(disposition):
return -1
return outcome_map.loc[int(vote), int(disposition)]
# Map the case-level disposition outcome
scdb_data.loc[:, "case_outcome_disposition"] = outcome_map.loc[1, scdb_data.loc[:, "caseDisposition"]].values
# Map the justice-level disposition outcome
scdb_data.loc[:, "justice_outcome_disposition"] = scdb_data.loc[:, ("vote", "caseDisposition")] \
.apply(lambda row: get_outcome(row["vote"], row["caseDisposition"]), axis=1)
###Output
_____no_output_____
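###Markdown
As a quick illustration of the coding scheme, we can spot-check `get_outcome` on a few example codes; the values below are illustrative, assuming the standard SCDB conventions (vote 1 = voted with the majority, disposition 2 = affirmed, disposition 3 = reversed):
###Code
# Illustrative spot-check of the outcome map (example codes, not drawn from the data)
print(get_outcome(1, 2))          # majority vote, case affirmed  -> expected 0 (affirm)
print(get_outcome(1, 3))          # majority vote, case reversed  -> expected 1 (reverse)
print(get_outcome(numpy.nan, 3))  # missing vote                  -> expected -1 (uncodable)
###Output
_____no_output_____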
###Markdown
Running a simulation In the section below, we define methods that handle the execution and analysis of simulations. Simulations are built around the following concepts: * __prediction methods__: functions that take historical data and determine, for each term-Justice pair, what prediction to make.
###Code
def predict_court_case_rate(historical_data, justice_list):
"""
Prediction method based on the entire Court case-level historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculation; must be a subset of the SCDB justice-centered
data known up to the given point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
"""
# Calculate the rate
counts = historical_data.loc[:, "case_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
# Create return dictionary
prediction_map = dict([(justice, rate) for justice in justice_list])
return prediction_map
def predict_court_justice_rate(historical_data, justice_list):
"""
Prediction method based on the entire Court justice-level historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculation; must be a subset of the SCDB justice-centered
data known up to the given point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
"""
# Calculate the rate
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
# Create return dictionary
prediction_map = dict([(justice, rate) for justice in justice_list])
return prediction_map
def predict_justice_rate(historical_data, justice_list):
"""
Prediction method based on the per-Justice historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculation; must be a subset of the SCDB justice-centered
data known up to the given point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
"""
# Create return dictionary
prediction_map = dict([(justice, numpy.nan) for justice in justice_list])
# Calculate the rate
for justice_id, justice_data in historical_data.groupby('justice'):
# Check justice ID
if justice_id not in justice_list:
continue
# Else, get the rate.
counts = justice_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
prediction_map[justice_id] = rate
# In some cases, we have a new Justice without historical data. Fill their value with the overall rate.
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
for justice in justice_list:
if pandas.isnull(prediction_map[justice]):
prediction_map[justice] = rate
return prediction_map
def predict_justice_last_rate(historical_data, justice_list, last_terms=1):
"""
Prediction method based on the per-Justice historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculation; must be a subset of the SCDB justice-centered
data known up to the given point in time
:param justice_list: list of Justices to generate predictions for
:param last_terms: number of recent terms to use for rate estimate
:return: dictionary containing a prediction score for reversal for each Justice
"""
# Create return dictionary
prediction_map = dict([(justice, numpy.nan) for justice in justice_list])
# Calculate the rate
for justice_id, justice_data in historical_data.groupby('justice'):
# Check justice ID
if justice_id not in justice_list:
continue
# Else, get the rate.
max_term = justice_data["term"].max()
counts = justice_data.loc[justice_data["term"] >= (max_term-last_terms+1), "justice_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
prediction_map[justice_id] = rate
# In some cases, we have a new Justice without historical data. Fill their value with the overall rate.
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.loc[1]) / (counts.loc[0] + counts.loc[1])
for justice in justice_list:
if pandas.isnull(prediction_map[justice]):
prediction_map[justice] = rate
return prediction_map
def run_simulation(simulation_data, term_list, prediction_method, score_method="binary"):
"""
This method defines the simulation driver.
:param simulation_data: SCDB DataFrame to use for simulation; must be a subset of SCDB justice-centered data
:param term_list: list of terms to simulate, e.g., [2000, 2001, 2002]
:param prediction_method: method that takes historical data and indicates, by justice, predictions for term
:param score_method: "binary" or "stratified"; binary maps to score >= 0.5, stratified maps to score >= random draw
:return: copy of simulation_data with additional columns representing predictions
"""
# Initialize predictions
return_data = simulation_data.copy()
return_data.loc[:, "prediction"] = numpy.nan
return_data.loc[:, "prediction_score"] = numpy.nan
# Iterate over all terms
for term in term_list:
# Get indices for dockets to predict and use for historical data
before_term_index = simulation_data.loc[:, "term"] < term
current_term_index = simulation_data.loc[:, "term"] == term
# Get the list of justices
term_justices = sorted(simulation_data.loc[current_term_index, "justice"].unique().tolist())
# Get the prediction map
prediction_map = prediction_method(simulation_data.loc[before_term_index, :], term_justices)
# Get the predictions
return_data.loc[current_term_index, "prediction_score"] = [prediction_map[j] for j in simulation_data.loc[current_term_index, "justice"].values]
# Support both most_frequent and stratified approaches
if score_method == "binary":
return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= 0.5).apply(int)
elif score_method == "stratified":
return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= numpy.random.random(return_data.loc[current_term_index].shape[0])).apply(int)
else:
raise NotImplementedError
# Get the return range and return
term_index = (return_data.loc[:, "term"].isin(term_list)) & (return_data.loc[:, "case_outcome_disposition"] >= 0) & (return_data.loc[:, "justice_outcome_disposition"] >= 0)
return return_data.loc[term_index, :]
# Set parameters
start_term = 1953
end_term = 2013
###Output
_____no_output_____
###Markdown
Predicting case outcomes with court reversal rate In the simulation below, we demonstrate the performance of the baseline model to predict case outcome based solely on the historical court reversal rate. The results indicate an accuracy of 68.2% with a frequency-weighted F1 score of 55%.
###Code
# Run simulation for simplest model
print("predict_court_case_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_case_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_court_case_rate.csv", index=False)
court_case_accuracy_by_year = output_data.groupby("term")["correct"].mean()
###Output
predict_court_case_rate
precision recall f1-score support
0.0 0.00 0.00 0.00 19892
1.0 0.68 1.00 0.81 42674
avg / total 0.47 0.68 0.55 62566
[[ 0 19892]
[ 0 42674]]
0.68206374069
0.810984416572
###Markdown
Predicting case outcomes with Justice reversal rate In the simulation below, we demonstrate the performance of the baseline model to predict case outcome based solely on the historical Justice reversal rate. The results are identical to the simulation above, and indicate an accuracy of 68.2% with a frequency-weighted F1 score of 55%.
###Code
# Run simulation for simplest model
print("predict_court_justice_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_justice_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_court_justice_rate.csv", index=False)
court_justice_accuracy_by_year = output_data.groupby("term")["correct"].mean()
###Output
predict_court_justice_rate
precision recall f1-score support
0.0 0.00 0.00 0.00 19892
1.0 0.68 1.00 0.81 42674
avg / total 0.47 0.68 0.55 62566
[[ 0 19892]
[ 0 42674]]
0.68206374069
0.810984416572
###Markdown
Predicting Justice outcomes with Justice reversal rate In the simulation below, we demonstrate the performance of the baseline model to predict Justice outcome based solely on the historical Justice reversal rate. The results indicate an accuracy of 62.9% with a frequency-weighted F1 score of 49%.
###Code
# Run simulation for simplest model
print("predict_justice_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_justice_justice_rate.csv", index=False)
justice_accuracy_by_year = output_data.groupby("term")["correct"].mean()
###Output
predict_justice_rate
precision recall f1-score support
0 0.33 0.00 0.00 23187
1 0.63 1.00 0.77 39379
avg / total 0.52 0.63 0.49 62566
[[ 30 23157]
[ 62 39317]]
0.628887894384
0.77203420616
###Markdown
Predicting Justice outcomes with trailing Justice reversal rate In the simulation below, we demonstrate the performance of the baseline model to predict Justice outcome based solely on the historical Justice reversal rate over the most recent term. The results indicate an accuracy of 62.5% with a frequency-weighted F1 score of 54%.
###Code
# Run simulation for simplest model
print("predict_justice_last_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_justice_justice_last_rate.csv", index=False)
justice_last_accuracy_by_year = output_data.groupby("term")["correct"].mean()
###Output
predict_justice_last_rate
precision recall f1-score support
0 0.47 0.10 0.16 23187
1 0.64 0.94 0.76 39379
avg / total 0.58 0.62 0.54 62566
[[ 2226 20961]
[ 2519 36860]]
0.624716299588
0.758436213992
###Markdown
Simulating case vote outcomes with trailing Justice reversal rate In addition to assessing the accuracy of Justice-level predictions, we can simulate case outcomes by aggregating the predicted Justice votes into a majority decision for each docket.
###Code
# Run vote simulation
print("predict_justice_last_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate)
output_data.loc[:, "case_prediction"] = numpy.nan
# Iterate over all dockets
for docket_id, docket_data in output_data.groupby('docketId'):
# Take the majority vote of the predicted Justice votes for this docket only
output_data.loc[docket_data.index, "case_prediction"] = int(docket_data.loc[:, "prediction"].value_counts().idxmax())
# Output case level predictions
output_data.to_csv("results/baseline_case_justice_last_rate.csv", index=False)
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
# Plot all accuracies
f = plt.figure(figsize=(10, 8))
plt.plot(court_case_accuracy_by_year.index, court_case_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(court_justice_accuracy_by_year.index, court_justice_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(justice_accuracy_by_year.index, justice_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(justice_last_accuracy_by_year.index, justice_last_accuracy_by_year,
marker='o', alpha=0.75)
plt.title("Accuracy by term and model", size=24)
plt.xlabel("Term")
plt.ylabel("% correct")
plt.legend(("Court by case disposition", "Court by Justice disposition",
"Justice by justice disposition", "Justice by trailing justice disposition"))
# Plot case disposition rate by term
rate_data = scdb_data.groupby("term")["case_outcome_disposition"].value_counts(normalize=True, sort=True).unstack()
f = plt.figure(figsize=(10, 8))
plt.plot(rate_data.index, rate_data, marker="o", alpha=0.75)
plt.title("Outcome rates by year", size=24)
plt.xlabel("Term")
plt.ylabel("Rate (% of outcomes/term)")
plt.legend(("NA", "Affirm", "Reverse"))
###Output
_____no_output_____ |
examples/segmentation-example.ipynb | ###Markdown
Segmentation If you have Unet, all CV is segmentation now. Goals- train Unet on isbi dataset- visualize the predictions Preparation
###Code
# Get the data:
! wget -P ./data/ https://www.dropbox.com/s/0rvuae4mj6jn922/isbi.tar.gz
! tar -xf ./data/isbi.tar.gz -C ./data/
###Output
_____no_output_____
###Markdown
Final folder structure with training data:```bashcatalyst-examples/ data/ isbi/ train-volume.tif train-labels.tif```
###Code
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
###Output
_____no_output_____
###Markdown
Data
###Code
# ! pip install tifffile
import tifffile as tiff
images = tiff.imread('./data/isbi/train-volume.tif')
masks = tiff.imread('./data/isbi/train-labels.tif')
data = list(zip(images, masks))
train_data = data[:-4]
valid_data = data[-4:]
import collections
import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
from catalyst.data import Augmentor
from catalyst.dl import utils
bs = 4
num_workers = 4
data_transform = transforms.Compose([
Augmentor(
dict_key="features",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0)),
Augmentor(
dict_key="features",
augment_fn=transforms.Normalize(
(0.5, ),
(0.5, ))),
Augmentor(
dict_key="targets",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0))
])
open_fn = lambda x: {"features": x[0], "targets": x[1]}
loaders = collections.OrderedDict()
train_loader = utils.get_loader(
train_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=True)
valid_loader = utils.get_loader(
valid_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=False)
loaders["train"] = train_loader
loaders["valid"] = valid_loader
###Output
_____no_output_____
###Markdown
Model
###Code
from catalyst.contrib.models.segmentation import Unet
###Output
_____no_output_____
###Markdown
Train
###Code
import torch
import torch.nn as nn
from catalyst.dl.runner import SupervisedRunner
# experiment setup
num_epochs = 50
logdir = "./logs/segmentation_notebook"
# model, criterion, optimizer
model = Unet(num_classes=1, in_channels=1, num_channels=64, num_blocks=4)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 40], gamma=0.3)
# model runner
runner = SupervisedRunner()
# model training
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir=logdir,
num_epochs=num_epochs,
verbose=True
)
###Output
_____no_output_____
###Markdown
Inference
###Code
from catalyst.dl.callbacks import InferCallback, CheckpointCallback
loaders = collections.OrderedDict([("infer", loaders["valid"])])
runner.infer(
model=model,
loaders=loaders,
callbacks=[
CheckpointCallback(
resume=f"{logdir}/checkpoints/best.pth"),
InferCallback()
],
)
###Output
_____no_output_____
###Markdown
Predictions visualization
###Code
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
sigmoid = lambda x: 1/(1 + np.exp(-x))
for i, (input, output) in enumerate(zip(
valid_data, runner.callbacks[1].predictions["logits"])):
image, mask = input
threshold = 0.5
plt.figure(figsize=(10,8))
plt.subplot(1, 3, 1)
plt.imshow(image, 'gray')
plt.subplot(1, 3, 2)
output = sigmoid(output[0].copy())
output = (output > threshold).astype(np.uint8)
plt.imshow(output, 'gray')
plt.subplot(1, 3, 3)
plt.imshow(mask, 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
Segmentation If you have Unet, all CV is segmentation now. Goals- train Unet on isbi dataset- visualize the predictions Preparation
###Code
# Get the data:
! wget -P ./data/ https://www.dropbox.com/s/0rvuae4mj6jn922/isbi.tar.gz
! tar -xf ./data/isbi.tar.gz -C ./data/
###Output
_____no_output_____
###Markdown
Final folder structure with training data:```bashcatalyst-examples/ data/ isbi/ train-volume.tif train-labels.tif```
###Code
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
###Output
_____no_output_____
###Markdown
Data
###Code
# ! pip install tifffile
import tifffile as tiff
images = tiff.imread('./data/isbi/train-volume.tif')
masks = tiff.imread('./data/isbi/train-labels.tif')
data = list(zip(images, masks))
train_data = data[:-4]
valid_data = data[-4:]
import collections
import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
from catalyst.data.augmentor import Augmentor
from catalyst.dl.utils import UtilsFactory
bs = 4
num_workers = 4
data_transform = transforms.Compose([
Augmentor(
dict_key="features",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0)),
Augmentor(
dict_key="features",
augment_fn=transforms.Normalize(
(0.5, ),
(0.5, ))),
Augmentor(
dict_key="targets",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0))
])
open_fn = lambda x: {"features": x[0], "targets": x[1]}
loaders = collections.OrderedDict()
train_loader = UtilsFactory.create_loader(
train_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=True)
valid_loader = UtilsFactory.create_loader(
valid_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=False)
loaders["train"] = train_loader
loaders["valid"] = valid_loader
###Output
_____no_output_____
###Markdown
Model
###Code
from catalyst.contrib.models.segmentation import UNet
###Output
_____no_output_____
###Markdown
Train
###Code
import torch
import torch.nn as nn
from catalyst.dl.experiments import SupervisedRunner
# experiment setup
num_epochs = 10
logdir = "./logs/segmentation_notebook"
# model, criterion, optimizer
model = UNet(num_classes=1, in_channels=1, num_filters=64, num_blocks=4)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 40], gamma=0.3)
# model runner
runner = SupervisedRunner()
# model training
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir=logdir,
num_epochs=num_epochs,
verbose=True
)
###Output
_____no_output_____
###Markdown
Inference
###Code
from catalyst.dl.callbacks import InferCallback, CheckpointCallback
loaders = collections.OrderedDict([("infer", loaders["valid"])])
runner.infer(
model=model,
loaders=loaders,
callbacks=[
CheckpointCallback(
resume=f"{logdir}/checkpoints/best.pth"),
InferCallback()
],
)
###Output
_____no_output_____
###Markdown
Predictions visualization
###Code
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
sigmoid = lambda x: 1/(1 + np.exp(-x))
for i, (input, output) in enumerate(zip(
valid_data, runner.callbacks[1].predictions["logits"])):
image, mask = input
threshold = 0.5
plt.figure(figsize=(10,8))
plt.subplot(1, 3, 1)
plt.imshow(image, 'gray')
plt.subplot(1, 3, 2)
output = sigmoid(output[0].copy())
output = (output > threshold).astype(np.uint8)
plt.imshow(output, 'gray')
plt.subplot(1, 3, 3)
plt.imshow(mask, 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
Segmentation If you have Unet, all CV is segmentation now. Goals- train Unet on isbi dataset- visualize the predictions Preparation Get the [data](https://www.dropbox.com/s/0rvuae4mj6jn922/isbi.tar.gz) and unpack it to `catalyst-examples/data` folder:```bashcatalyst-examples/ data/ isbi/ train-volume.tif train-labels.tif``` Data
###Code
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# ! pip install tifffile
import tifffile as tiff
images = tiff.imread('./data/isbi/train-volume.tif')
masks = tiff.imread('./data/isbi/train-labels.tif')
data = list(zip(images, masks))
train_data = data[:-4]
valid_data = data[-4:]
import collections
import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
from catalyst.data.augmentor import Augmentor
from catalyst.dl.utils import UtilsFactory
bs = 4
n_workers = 4
data_transform = transforms.Compose([
Augmentor(
dict_key="features",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0)),
Augmentor(
dict_key="features",
augment_fn=transforms.Normalize(
(0.5, ),
(0.5, ))),
Augmentor(
dict_key="targets",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0))
])
open_fn = lambda x: {"features": x[0], "targets": x[1]}
loaders = collections.OrderedDict()
train_loader = UtilsFactory.create_loader(
train_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
workers=n_workers,
shuffle=True)
valid_loader = UtilsFactory.create_loader(
valid_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
workers=n_workers,
shuffle=False)
loaders["train"] = train_loader
loaders["valid"] = valid_loader
###Output
_____no_output_____
###Markdown
Model
###Code
from catalyst.contrib.models.segmentation import UNet
###Output
_____no_output_____
###Markdown
Model, criterion, optimizer
###Code
import torch
import torch.nn as nn
model = UNet(num_classes=1, in_channels=1, num_filters=64, num_blocks=4)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = None # for OneCycle usage
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 40], gamma=0.3)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2, verbose=True)
###Output
_____no_output_____
###Markdown
Callbacks
###Code
import collections
from catalyst.dl.callbacks import (
LossCallback,
Logger, TensorboardLogger,
OptimizerCallback, SchedulerCallback, CheckpointCallback,
PrecisionCallback, OneCycleLR)
n_epochs = 50
logdir = "./logs/segmentation_notebook"
callbacks = collections.OrderedDict()
callbacks["loss"] = LossCallback()
callbacks["optimizer"] = OptimizerCallback()
# OneCylce custom scheduler callback
callbacks["scheduler"] = OneCycleLR(
cycle_len=n_epochs,
div=3, cut_div=4, momentum_range=(0.95, 0.85))
# Pytorch scheduler callback
# callbacks["scheduler"] = SchedulerCallback(
# reduce_metric="loss_main")
callbacks["saver"] = CheckpointCallback()
callbacks["logger"] = Logger()
callbacks["tflogger"] = TensorboardLogger()
###Output
_____no_output_____
###Markdown
Train
###Code
from catalyst.dl.runner import SupervisedModelRunner
runner = SupervisedModelRunner(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler)
runner.train(
loaders=loaders,
callbacks=callbacks,
logdir=logdir,
epochs=n_epochs, verbose=True)
###Output
_____no_output_____
###Markdown
Inference
###Code
from catalyst.dl.callbacks import InferCallback
callbacks = collections.OrderedDict()
callbacks["saver"] = CheckpointCallback(
resume=f"{logdir}/checkpoint.best.pth.tar")
callbacks["infer"] = InferCallback()
loaders = collections.OrderedDict()
loaders["infer"] = UtilsFactory.create_loader(
valid_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
workers=n_workers,
shuffle=False)
runner.infer(
loaders=loaders,
callbacks=callbacks,
verbose=True)
###Output
_____no_output_____
###Markdown
Predictions visualization
###Code
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
sigmoid = lambda x: 1/(1 + np.exp(-x))
for i, (input, output) in enumerate(zip(
valid_data, callbacks["infer"].predictions["logits"])):
image, mask = input
threshold = 0.5
plt.figure(figsize=(10,8))
plt.subplot(1, 3, 1)
plt.imshow(image, 'gray')
plt.subplot(1, 3, 2)
output = sigmoid(output[0].copy())
output = (output > threshold).astype(np.uint8)
plt.imshow(output, 'gray')
plt.subplot(1, 3, 3)
plt.imshow(mask, 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
Segmentation If you have Unet, all CV is segmentation now. Goals- train Unet on isbi dataset- visualize the predictions Preparation Get the [data](https://www.dropbox.com/s/0rvuae4mj6jn922/isbi.tar.gz) and unpack it to `catalyst-examples/data` folder:```bashcatalyst-examples/ data/ isbi/ train-volume.tif train-labels.tif```
###Code
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
###Output
_____no_output_____
###Markdown
Data
###Code
# ! pip install tifffile
import tifffile as tiff
images = tiff.imread('./data/isbi/train-volume.tif')
masks = tiff.imread('./data/isbi/train-labels.tif')
data = list(zip(images, masks))
train_data = data[:-4]
valid_data = data[-4:]
import collections
import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
from catalyst.data.augmentor import Augmentor
from catalyst.dl.utils import UtilsFactory
bs = 4
num_workers = 4
data_transform = transforms.Compose([
Augmentor(
dict_key="features",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0)),
Augmentor(
dict_key="features",
augment_fn=transforms.Normalize(
(0.5, ),
(0.5, ))),
Augmentor(
dict_key="targets",
augment_fn=lambda x: \
torch.from_numpy(x.copy().astype(np.float32) / 255.).unsqueeze_(0))
])
open_fn = lambda x: {"features": x[0], "targets": x[1]}
loaders = collections.OrderedDict()
train_loader = UtilsFactory.create_loader(
train_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=True)
valid_loader = UtilsFactory.create_loader(
valid_data,
open_fn=open_fn,
dict_transform=data_transform,
batch_size=bs,
num_workers=num_workers,
shuffle=False)
loaders["train"] = train_loader
loaders["valid"] = valid_loader
###Output
_____no_output_____
###Markdown
Model
###Code
from catalyst.contrib.models.segmentation import UNet
###Output
_____no_output_____
###Markdown
Train
###Code
import torch
import torch.nn as nn
from catalyst.dl.experiments import SupervisedRunner
# experiment setup
num_epochs = 10
logdir = "./logs/segmentation_notebook"
# model, criterion, optimizer
model = UNet(num_classes=1, in_channels=1, num_filters=64, num_blocks=4)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 40], gamma=0.3)
# model runner
runner = SupervisedRunner()
# model training
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir=logdir,
num_epochs=num_epochs,
verbose=True
)
###Output
_____no_output_____
###Markdown
Inference
###Code
from catalyst.dl.callbacks import InferCallback, CheckpointCallback
loaders = collections.OrderedDict([("infer", loaders["valid"])])
runner.infer(
model=model,
loaders=loaders,
callbacks=[
CheckpointCallback(
resume=f"{logdir}/checkpoints/best.pth"),
InferCallback()
],
)
###Output
_____no_output_____
###Markdown
Predictions visualization
###Code
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
sigmoid = lambda x: 1/(1 + np.exp(-x))
for i, (input, output) in enumerate(zip(
valid_data, runner.callbacks[1].predictions["logits"])):
image, mask = input
threshold = 0.5
plt.figure(figsize=(10,8))
plt.subplot(1, 3, 1)
plt.imshow(image, 'gray')
plt.subplot(1, 3, 2)
output = sigmoid(output[0].copy())
output = (output > threshold).astype(np.uint8)
plt.imshow(output, 'gray')
plt.subplot(1, 3, 3)
plt.imshow(mask, 'gray')
plt.show()
###Output
_____no_output_____ |
es1/Exercise1.ipynb | ###Markdown
Exercise 1.1 Task Test the Pseudo-Random Number generator downloaded from the NSL Ariel web site by estimating the integrals:- $\langle r \rangle = \int_0^1 r dr = 1/2$- $\sigma^2 = \int_0^1 (r-1/2)^2 dr = 1/12$ and dividing the interval $[0,1]$ into $M$ identical sub-intervals to implement the $\chi^2$ test. Solution This exercise consists of a central script (main.cpp) and a pseudo-random number generator library (random.hpp). The idea behind the methodology is to run two for loops: the external one over N blocks and the inner one over L numbers, where the random numbers are summed according to the integrand equation. The average for each block is then calculated and stored together with its square. Later, the data-blocking analysis is performed by calculating the progressive mean, the progressive mean of the squares and the associated error. The error on the mean was calculated following this formula: $ \sigma_A^2 = \Big\langle (A - \langle A\rangle)^2\Big\rangle = \langle A^2 \rangle - \langle A \rangle^2 $. By the central limit theorem the variance of the mean decreases as $\frac{1}{N}$ (i.e. the statistical error as $\frac{1}{\sqrt{N}}$), so the estimate improves with increasing N. ParametersThe simulation uses two parameters:- **Number of runs** which indicates how many times the simulation runs (defaults to 10000)- **Number of blocks** which is the number of blocks into which the runs are split (defaults to 100)
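The block averages and their progressive uncertainty are computed in the C++ code; purely as an illustration, a minimal Python sketch of the same data-blocking estimate could look like the following (the function name `blocking` and the use of NumPy's `default_rng` are illustrative choices, not part of the repository):
```python
import numpy as np

def blocking(samples, n_blocks):
    """Progressive block means and their statistical uncertainty (illustrative sketch)."""
    samples = np.asarray(samples, dtype=float)
    L = len(samples) // n_blocks                       # numbers per block
    block_means = samples[:n_blocks * L].reshape(n_blocks, L).mean(axis=1)
    n = np.arange(1, n_blocks + 1)
    prog_mean = np.cumsum(block_means) / n             # <A> after i blocks
    prog_mean2 = np.cumsum(block_means ** 2) / n       # <A^2> after i blocks
    var = np.clip(prog_mean2 - prog_mean ** 2, 0.0, None)
    err = np.sqrt(var / np.maximum(n - 1, 1))          # sigma of the mean (0 for the first block)
    return prog_mean, err

# usage: estimate <r> = 1/2 from uniform pseudo-random numbers
rng = np.random.default_rng(1)
prog_mean, prog_err = blocking(rng.random(10000), 100)
print(prog_mean[-1], "+/-", prog_err[-1])
```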
###Code
base_dir = "es1.1/es1.1.1/"
filename = base_dir+"config.ini"
config = configparser.ConfigParser()
config.read(filename)
M=int(config["simulation"]["runs"])
N=int(config["simulation"]["blocks"])
logger_debug = config.getboolean("settings", "logger_debug")
if M%N != 0:
raise ValueError(f"Number of blocks not a factor of number of runs. {M} runs -- {N} blocks")
L=int(M/N)
print(f"Ex1.1.1: Configuration file '{filename}' successfully parsed")
x = np.arange(N)*L
_sum,_error = zip(*[
(lines.split('\t')[1],lines.split('\t')[2] ) for lines in open(base_dir+config['settings']['input_file']).readlines()])
_sum,_error = np.asfarray(_sum),np.asfarray(_error)
avg = [mean(_sum-0.5) for i in range(len(x))]
_mean = mean(_sum-0.5)
_mean = float("{:.4f}".format(_mean))
if x.shape == _sum.shape and _sum.shape == _error.shape and logger_debug:
print("Ex1.1.1: Dimensional checks passed.")
print("Average of intergral without datablocking\n")
y_f = np.loadtxt(base_dir+"/outputs/temp.dat")
x_f = [i for i in range(len(y_f))]
mean_y_f = [mean(y_f) for i in range(len(y_f))]
plt.title(f"Integral value without datablocking")
plt.plot(x_f, y_f,label="Data")
plt.plot(x_f,mean_y_f,label="Mean")
plt.xlabel("Block")
plt.ylabel("<r>")
plt.grid(True)
plt.legend()
plt.show()
print("")
print("Ex1.1.1: Graph successfully plotted")
print("Data average: ",mean_y_f[0])
print("Expected value: ",0.5)
print("Uncertainty: ",mean_y_f[0]-0.5)
print("Average of integral with datablocking\n")
try:
plt.title(f"Integral value with {M} runs and {N} blocks")
plt.errorbar(x,_sum-0.5,yerr=_error,label="Experimental Data")
plt.plot(x,[_sum[-1]-0.5 for i in range(len(x))],color="orange",label="Final value",linewidth=2)
plt.plot(x,[0 for i in range(len(x))],label="Expected value",linewidth=2)
plt.xlabel('run')
plt.ylabel('<r>-1/2')
plt.grid(True)
plt.legend()
plt.show()
print("")
print(f"Final value after all blocks: {_sum[-1]-0.5}")
print("Expected value: ",0.0)
print("Uncertainty: ",_mean-0.0)
print("Ex1.1.1: Graph successfully plotted\n\n")
except ValueError as e:
print("Ex1.1.1: Cannot execute error graph:\n- Possible shape mismatch.\n- Forgot to call make\n- Number of blocks not a factor\n\n")
base_dir = "es1.1/es1.1.2/"
filename = base_dir+"config.ini"
config = configparser.ConfigParser()
config.read(filename)
print(f"Ex1.1.2: Configuration file '{filename}' successfully parsed")
M=int(config["simulation"]["runs"])
N=int(config["simulation"]["blocks"])
if M%N != 0:
raise ValueError(f"Number of blocks not a factor of number of runs. {M} - {N}")
L=int(M/N)
x = np.arange(N)*L
_sum,_error = zip(*[
(lines.split('\t')[1],lines.split('\t')[2] ) for lines in open(base_dir+config['settings']['input_file']).readlines()])
_sum,_error = np.asfarray(_sum),np.asfarray(_error)
avg = [mean(_sum-1./12) for i in range(len(x))]
if x.shape == _sum.shape and _sum.shape == _error.shape:
print("Ex1.1.2: Dimensional checks passed.")
plt.title(f"Integral value with {M} runs and {N} blocks")
plt.errorbar(x,_sum-1/12,yerr=_error,label="Experimental Data")
plt.plot(x,[_sum[-1]-1/12 for i in range(len(x))],color="orange",label="Final value",linewidth=2)
plt.plot(x,[0 for i in range(len(x))],label="Expected value",linewidth=2)
plt.xlabel('# Runs')
plt.ylabel('<r>-1/12')
plt.grid(True)
plt.legend()
plt.show()
################## ---- CHI SQUARED ---- ##################
base_dir = "es1.1/es1.1.3/"
filename = base_dir+"config.ini"
config = configparser.ConfigParser()
config.read(filename)
print(f"Ex1.1.3: Configuration file '{filename}' successfully parsed")
M = int(config["simulation"]["blocks"])
N = int(config["simulation"]["numbers"])
chi2 = [float(line.split("\t")[1]) for line in open(base_dir+config['settings']['input_file']).readlines()]
x = [i for i in range(M)]
avg = [mean(chi2) for i in range(len(x))]
plt.title(f"Chi-squared test with {N} numbers and {M} blocks")
plt.errorbar(x,chi2,label="Data")
plt.plot(x,avg,label="mean",linewidth=3,color="orange")
plt.xlabel('# Runs')
plt.ylabel('chi^2')
plt.grid(True)
plt.legend()
plt.show()
_mean = mean(chi2)
diff = abs(int(N/M)-mean(chi2))
print("Mean: ",_mean,"\t\tExpected: ",N/M,"\t\tDifference: ","{:.4f}".format(diff))
###Output
Ex1.1.3: Configuration file 'es1.1/es1.1.3/config.ini' successfully parsed
###Markdown
Results As expected, the accuracy of the simulation improves with the number of Monte Carlo runs. A larger number of blocks gives more points to the graph but a slightly lower accuracy, because the average for each block is calculated with fewer points.The following graph shows the estimate of the integral minus the expected value (in blue) against the number of runs. The overall average of the data is also plotted (orange).The fact that the accuracy improves with the number of tries, and that the calculated value stabilizes quickly, supports the validity of the pseudo-random number generator. In fact, a non-functional generator would not exhibit these properties, but would rather produce a divergent estimate of the integral, or converge to a wrong value. The fact that the sequence converges to zero with a relatively small error shows that the calculated value is correct and the central limit theorem holds.The chi-squared values correctly fluctuate around the expected value of N/M (100). However, the accuracy of the values does not improve with time. This is because the generator does not produce truly random numbers, but pseudo-random numbers: these are produced according to a deterministic algorithm that uses an initializing seed and the modulo operation, making it look like the numbers are randomly generated. Exercise 1.2 Task - Extend Pseudo-Random Number generator downloaded from the NSL Ariel web site and check the Central Limit Theorem:1. Add two probability distributions by using the **method of the inversion of the cumulative distribution** to sample from a **generic** exponential distribution, $p(x) = \lambda \exp(-\lambda x)$, $x\in [0;+\infty]$ (see this Wikipedia link), and a **generic** Cauchy-Lorentz distribution $p(x)=\frac{1}{\pi}\frac{\Gamma}{(x-\mu)^2+\Gamma^2}$, $x\in [-\infty;+\infty]$ (see this Wikipedia link).2. Make 3 pictures with the histograms obtained filling them with $10^4$ realizations of $S_N = \frac{1}{N}\sum_{i=1}^N x_i$ (for $N=1, 2, 10, 100$), being $x_i$ a random variable sampled throwing a *standard* dice (fig.1), an *exponential* dice (fig.2, use $\lambda=1$) and a *Lorentzian* dice (fig.3, use $\mu=0$ and $\Gamma=1$).Note that you can try to fit the case $N=100$ with a Gaussian for standard and exponential dices, whereas you should use a Cauchy-Lorentz distribution for the last case. Solution The Random class has been enriched with two additional probability distributions: Exp($\lambda$) and Lorentz($\mu$,$\Gamma$). In both cases, the number y distributed according to $p_y(y)$ is obtained from a pseudo-random number uniformly generated in $[0,1]$ and returned through the respective inverted cumulative function.The second task is achieved by writing three files, each containing $10^4$ averages of N numbers (N = 1, 2, 10 and 100) generated according to three distributions: uniform, exponential and Cauchy-Lorentz. The files are then read by the Python code below, which produces 4 histograms per file, one for each value of N. On the histogram for N=100, a fit is made using a Gaussian function for the uniform and exponential distributions, while a Cauchy-Lorentz function is used for the Lorentzian case.
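The sampling itself lives in the C++ Random class; as a hedged illustration of the inverse-CDF formulas described above (assuming $\lambda=1$, $\mu=0$, $\Gamma=1$ as in the exercise; the function names below are illustrative, not the repository's API), a Python sketch would be:
```python
import numpy as np

def sample_exponential(u, lam=1.0):
    # inverse CDF of p(x) = lam * exp(-lam * x):  x = -ln(1 - u) / lam
    return -np.log(1.0 - u) / lam

def sample_lorentz(u, mu=0.0, gamma=1.0):
    # inverse CDF of the Cauchy-Lorentz distribution:  x = mu + gamma * tan(pi * (u - 1/2))
    return mu + gamma * np.tan(np.pi * (u - 0.5))

rng = np.random.default_rng(1)
u = rng.random(10_000)                  # uniform numbers in [0, 1)
exp_samples = sample_exponential(u)     # lambda = 1
lor_samples = sample_lorentz(u)         # mu = 0, gamma = 1
print(exp_samples.mean())               # should be close to 1 for Exp(1)
```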
###Code
filename = "es1.2/config.ini"
config = configparser.ConfigParser()
config.read(filename)
print(f"Ex1.2: Configuration file '{filename}' successfully parsed")
console = Console()
M = int(config["simulation"]["throws"])
numbers = functions.convert_string(config["simulation"]["numbers"],d_type=int)
logger_debug = config.getboolean("settings", "logger_debug")
base_dir = "es1.2/"+str(config["settings"]["base_dir"])
colors = ["blue","orange","green","magenta"]
if logger_debug: print("Ex1.2: Parameters loaded.")
def Gaussian(x,mu,sigma):
x = np.asfarray(x)
return np.exp( -(pow(x-mu,2)) / (2*pow(sigma,2)) )
def Gauss (x, a, mu, sigma):
return a*np.exp(-((x-mu)/sigma)**2)/(np.sqrt(2*np.pi)*sigma)
#def Gaussian(x,mu,sigma):
# x = np.asfarray(x)
# return 1./np.sqrt(2.*np.pi*sigma**2)*np.exp(-0.5*(x-mu)**2/sigma**2)
def Lorentz(x, a, mu, gamma):
x = np.asfarray(x)
return a*gamma/(np.pi*((x-mu)**2.+gamma**2.))
#for filename in os.listdir(base_dir):
filename = "unif.dat"
distrib = "Uniform"
console.print(f"------------------ {filename} ------------------", style="bold red")
lines = open(os.path.join(base_dir,filename),"r+").read().split("\n")[:-1]
matrix = []
i = 0
for line in lines:
#line represent each n (1,2,10,100)
elems = line.split("\t")
#elem represent each number for a fixed n
temp = []
for e in elems[:-1]:
temp.append(float(e))
matrix.append(temp)
f, ax = plt.subplots(1,4,figsize=(12,6))
plt.suptitle(f"Sampling of {distrib} distribution",fontsize=22)
for i,item in enumerate(matrix):
print(i)
if filename == "gauss.dat":
min_range = -50
max_range = 50
else:
min_range = min(item)
max_range = max(item)
print(f"min: {min(item)}\t max: {max(item)}")
print(f"i: {i}, len: {len(matrix)}")
print(f"min range: {min_range}\tmax range: {max_range}")
exec(f"ax[{i}].axvline(np.mean(item), color='k', linestyle='dashed', linewidth=0.5)")
exec(f"bin_heights, bin_borders, _ = ax[{i}].hist(item,label=f'N= {numbers[i]}',bins=100,color=colors[i])")
if i==3:
bin_centers = bin_borders[:-1] + np.diff(bin_borders) / 2
p_opt, p_cov = curve_fit(Gauss,bin_centers,bin_heights,p0=[100,2,1])
print("Optimal parameters: ",p_opt)
#ax[i].plot(bin_centers,bin_heights,color="red")
ax[i].plot(bin_centers,Gauss(bin_centers,*p_opt),label="Fit",linewidth=3)
print("-----------------------------------------------")
lines_labels = [ax.get_legend_handles_labels() for ax in f.axes]
lines, labels = [sum(lol, []) for lol in zip(*lines_labels)]
plt.xlabel("Bin")
plt.ylabel("Frequency")
#plt.legend(lines,labels)
plt.show()
print("\n\n\n")
filename = "exp.dat"
distrib = "Exponential"
console.print(f"------------------ {filename} ------------------", style="bold red")
lines = open(os.path.join(base_dir,filename),"r+").read().split("\n")[:-1]
matrix = []
i = 0
for line in lines:
#line represent each n (1,2,10,100)
elems = line.split("\t")
#elem represent each number for a fixed n
temp = []
for e in elems[:-1]:
temp.append(float(e))
matrix.append(temp)
f, ax = plt.subplots(1,4,figsize=(10,6))
plt.suptitle(f"Sampling of {distrib} distribution",fontsize=22)
for i,item in enumerate(matrix):
print(i)
if filename == "gauss.dat":
min_range = -50
max_range = 50
else:
min_range = min(item)
max_range = max(item)
print(f"min: {min(item)}\t max: {max(item)}")
print(f"i: {i}, len: {len(matrix)}")
print(f"min range: {min_range}\tmax range: {max_range}")
exec(f"ax[{i}].axvline(np.mean(item), color='k', linestyle='dashed', linewidth=0.5)")
exec(f"bin_heights, bin_borders, _ = ax[{i}].hist(item,label=f'N= {numbers[i]}',bins=50,color=colors[i])")
if i==3:
bin_centers = bin_borders[:-1] + np.diff(bin_borders) / 2
p_opt, p_cov = curve_fit(Gauss,bin_centers,bin_heights,p0=[350,2,2])
print("Optimal parameters: ",p_opt)
#ax[i].plot(bin_centers,bin_heights,color="red")
ax[i].plot(bin_centers,Gauss(bin_centers,*p_opt),label="Fit",linewidth=3)
print("-----------------------------------------------")
lines_labels = [ax.get_legend_handles_labels() for ax in f.axes]
lines, labels = [sum(lol, []) for lol in zip(*lines_labels)]
plt.xlabel('Bin')
plt.ylabel("Frequency")
plt.legend(lines,labels)
plt.show()
print("\n\n\n")
filename = "gauss.dat"
distrib = "Cauchy-Lorentz"
console.print(f"------------------ {filename} ------------------", style="bold red")
lines = open(os.path.join(base_dir,filename),"r+").read().split("\n")[:-1]
matrix = []
i = 0
for line in lines:
#line represent each n (1,2,10,100)
elems = line.split("\t")
#elem represent each number for a fixed n
temp = []
for e in elems[:-1]:
temp.append(float(e))
matrix.append(temp)
f, ax = plt.subplots(1,4,figsize=(10,6))
plt.suptitle(f"Sampling of {distrib} distribution",fontsize=22)
for i,item in enumerate(matrix):
print(i)
if filename == "gauss.dat":
min_range = -50
max_range = 50
else:
min_range = min(item)
max_range = max(item)
print(f"min: {min(item)}\t max: {max(item)}")
print(f"i: {i}, len: {len(matrix)}")
print(f"min range: {min_range}\tmax range: {max_range}")
exec(f"bin_heights, bin_borders , _= ax[{i}].hist(item,label=f'N= {numbers[i]}',range=(-50,50),bins=100,color=colors[i])")
exec(f"ax[{i}].axvline(np.mean(item), color='k', linestyle='dashed', linewidth=0.5)")
if i==3:
bin_centers = bin_borders[:-1] + np.diff(bin_borders) / 2
p_opt, p_cov = curve_fit(Lorentz,bin_centers,bin_heights)
print("Optimal parameters: ",p_opt)
#ax[i].plot(bin_centers,bin_heights,color="red")
ax[i].plot(bin_centers,Lorentz(bin_centers,*p_opt),label="Fit",linewidth=2)
print("-----------------------------------------------")
lines_labels = [ax.get_legend_handles_labels() for ax in f.axes]
lines, labels = [sum(lol, []) for lol in zip(*lines_labels)]
plt.xlabel("Bin")
plt.ylabel("Frequency")
plt.legend(lines,labels, loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 1.3 Task **Simulate** Buffon’s experiment (see LSN_Lecture_00, supplementary material): A needle of length $L$ is thrown at random onto a horizontal plane ruled with straight lines a distance $d$ (must be $d > L$, but do not use $d\gg L$ otherwise $P\ll 1$) apart. The probability $P$ that the needle will intersect one of these lines is: $P = 2L/\pi d$. This could be used to evaluate $\pi$ from throws of the needle: if the needle is thrown down $N_{thr}$ times and is observed to land on a line $N_{hit}$ of those times, we can make an estimate of $\pi$ from$$\pi = \frac{2L}{Pd} = \lim_{N_{thr} \to \infty}\frac{2LN_{thr}}{N_{hit}d}$$Make a picture of the estimation of $\pi$ and its uncertainty (Standard Deviation of the mean) with a large number of *throws* $M$ as a function of the number of blocks, $N$ (see below: Computing statistical uncertainties). If possible, do not use $\pi$ to evaluate $\pi$. Solution The simulation is composed of a main.cpp, random.h and a datablocking function (defined in a shared header).After having initialized the number generator and useful variables for the simulation, the main script runs an external and an internal for loop, which cycle over the number of blocks and the number of throws respectively. In fact, the script simulates the throwing of numerous needles inside a 2D grid, counting the number of times a needle hits a grid line against the total number of throws. Each throw is simulated by generating a random number in the range [0,spacing], where spacing is a configuration parameter; this fixes the x component of one end of the needle. Subsequently, another random number is generated to represent the direction of the needle with respect to its previously-generated end. The other extremity of the needle is then calculated with a simple trigonometric formula. To check whether the needle hits a line in the plane (the lines are placed at the natural values of the x-axis 1,2,..), the script checks whether the two ends fall between the same pair of grid lines (no hit) or on opposite sides of a line (hit).The estimated value for $\pi$ for each block is saved in a container that is processed by the datablocking method before terminating the simulation.
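The throwing and counting are implemented in the C++ main; the following Python sketch only illustrates the geometric check described above. The needle length, the spacing and the accept/reject sampling of the direction (used to avoid invoking $\pi$) are illustrative assumptions, not the repository's actual implementation:
```python
import numpy as np

rng = np.random.default_rng(1)
L_needle, d_spacing = 0.8, 1.0          # needle length and line spacing (illustrative, d > L)

def needle_hits_line():
    """True if a randomly thrown needle crosses one of the vertical lines at x = 0, 1, 2, ..."""
    x1 = rng.uniform(0.0, d_spacing)                 # x of the first end
    while True:                                      # random direction without pi: accept/reject on the unit disc
        dx, dy = rng.uniform(-1.0, 1.0, size=2)
        r2 = dx * dx + dy * dy
        if 0.0 < r2 <= 1.0:
            break
    x2 = x1 + L_needle * dx / np.sqrt(r2)            # x of the second end
    return np.floor(x1 / d_spacing) != np.floor(x2 / d_spacing)

n_thr = 200_000
n_hit = sum(needle_hits_line() for _ in range(n_thr))
print("pi estimate:", 2.0 * L_needle * n_thr / (n_hit * d_spacing))
```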
###Code
import pylab as pl
import math
import numpy as np
from matplotlib import collections as mc
from matplotlib import pyplot as plt
printlines = 30
print("---> Showing {} needles on the plane\n".format(printlines))
planelines = []
planecolors = []
# Load lines
for iter in range(11):
planelines.append([(iter,0),(iter,10)])
planecolors.append([0,0,1,1])
# Load Data
i, x1, y1, x2, y2, state = np.loadtxt("es1.3/outputs/positions.dat",unpack=True) # state = 1 -> hit, state = 0 -> miss
lines = []
colors = []
for iter in range(printlines):
segment = [(x1[iter],y1[iter]),(x2[iter],y2[iter])]
lines.append(segment)
if state[iter]==1:
colors.append([0,1,0,1])
else:
colors.append([1,0,0,1])
plane = mc.LineCollection(planelines, colors=planecolors, linewidths=1)
lc = mc.LineCollection(lines, colors=colors, linewidths=1)
fig, ax = pl.subplots(figsize=(14,6))
ax.add_collection(plane)
ax.add_collection(lc)
ax.autoscale()
ax.margins(0.1)
print("---> Showing estimate of π using datablocking\n")
i, pi, err = np.loadtxt("es1.3/outputs/results.dat",unpack=True)
plt.title("Estimation of PI")
plt.xlabel("Block")
plt.ylabel("PI")
plt.errorbar(i,pi,yerr=err,label="Data",fmt='r.',ecolor="orange",ms=3)
pis = [math.pi for iter in range(len(i))]
plt.plot(i,pis,label='Pi',color="blue")
plt.grid(True)
plt.legend()
plt.plot()
plt.show()
###Output
---> Showing estimate of π using datablocking
|
Section-2-Machine-Learning-Pipeline-Overview/Machine-Learning-Pipeline-Step2-Feature-Engineering.ipynb | ###Markdown
Machine Learning Model Building Pipeline: Feature EngineeringIn the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline, which we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps:1. Data Analysis2. Feature Engineering3. Feature Selection4. Model Building**This is the notebook for step 2: Feature Engineering**We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the square root of the mean squared error (rmse). How do I download the dataset?To download the House Price dataset go to this website:https://www.kaggle.com/c/house-prices-advanced-regression-techniques/dataScroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice.**Note the following:**- You need to be logged in to Kaggle in order to download the datasets.- You need to accept the terms and conditions of the competition to download the dataset- If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here.==================================================================================================== House Prices dataset: Feature EngineeringIn the following cells, we will engineer / pre-process the variables of the House Price Dataset from Kaggle. We will engineer the variables so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardise the values of the variables to the same range Setting the seedIt is important to note that we are engineering variables and pre-processing data with the idea of deploying the model. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**.Let's go ahead and load the dataset.
###Code
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import MinMaxScaler
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
import warnings
warnings.simplefilter(action='ignore')
# load dataset
data = pd.read_csv('houseprice.csv')
print(data.shape)
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testBefore beginning to engineer our features, it is important to separate our data into training and testing sets. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting. **Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(data,
data['SalePrice'],
test_size=0.1,
# we are setting the seed here:
random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesFor categorical variables, we will replace missing values with the string "Missing".
###Code
# make a list of the categorical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes == 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace missing values with new label: "Missing"
X_train[vars_with_na] = X_train[vars_with_na].fillna('Missing')
X_test[vars_with_na] = X_test[vars_with_na].fillna('Missing')
# check that we have no missing information in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing value indicator variable- and then replace the missing values in the original variable with the mode
###Code
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes != 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace engineer missing values as we described above
for var in vars_with_na:
# calculate the mode using the train set
mode_val = X_train[var].mode()[0]
# add binary missing indicator (in train and test)
X_train[var+'_na'] = np.where(X_train[var].isnull(), 1, 0)
X_test[var+'_na'] = np.where(X_test[var].isnull(), 1, 0)
# replace missing values by the mode
# (in train and test)
X_train[var] = X_train[var].fillna(mode_val)
X_test[var] = X_test[var].fillna(mode_val)
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeWe learned in the previous Jupyter notebook that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
###Output
_____no_output_____
###Markdown
Numerical variable transformationIn the previous Jupyter notebook, we observed that the numerical variables are not normally distributed.We will log transform the positive numerical variables in order to get a more Gaussian-like distribution. This tends to help Linear machine learning models.
###Code
for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']:
X_train[var] = np.log(X_train[var])
X_test[var] = np.log(X_test[var])
# check that test set does not contain null values in the engineered variables
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Categorical variables Removing rare labelsFirst, we will group those categories within variables that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# let's capture the categorical variables in a list
cat_vars = [var for var in X_train.columns if X_train[var].dtype == 'O']
def find_frequent_labels(df, var, rare_perc):
# function finds the labels that are shared by more than
# a certain % of the houses in the dataset
df = df.copy()
tmp = df.groupby(var)['SalePrice'].count() / len(df)
return tmp[tmp > rare_perc].index
for var in cat_vars:
# find the frequent categories
frequent_ls = find_frequent_labels(X_train, var, 0.01)
# replace rare categories by the string "Rare"
X_train[var] = np.where(X_train[var].isin(
frequent_ls), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(
frequent_ls), X_test[var], 'Rare')
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, var, target):
# order the categories in a variable from that with the lowest
# house sale price, to that with the highest
ordered_labels = train.groupby([var])[target].mean().sort_values().index
# create a dictionary of ordered categories to integer values
ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
# use the dictionary to replace the categorical strings by integers
train[var] = train[var].map(ordinal_label)
test[var] = test[var].map(ordinal_label)
for var in cat_vars:
replace_categories(X_train, X_test, var, 'SalePrice')
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(df, var):
# function plots median house sale price per encoded
# category
df = df.copy()
df.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylabel('SalePrice')
plt.show()
for var in cat_vars:
analyse_vars(X_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning, Neighborhood, and ExterQual. Note how, the higher the integer that now represents the category, the higher the mean house sale price.(remember that the target is log-transformed, that is why the differences seem so small). Feature ScalingFor use in linear models, features need to be either scaled or normalised. In the next section, I will scale features to the minimum and maximum values:
###Code
# capture all variables in a list
# except the target and the ID
train_vars = [var for var in X_train.columns if var not in ['Id', 'SalePrice']]
# count number of variables
len(train_vars)
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train[train_vars])
# transform the train and test set
X_train[train_vars] = scaler.transform(X_train[train_vars])
X_test[train_vars] = scaler.transform(X_test[train_vars])
X_train.head()
# let's now save the train and test sets for the next notebook!
X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
###Output
_____no_output_____
###Markdown
Machine Learning Model Building Pipeline: Feature EngineeringIn the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline, which we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps:1. Data Analysis2. Feature Engineering3. Feature Selection4. Model Building**This is the notebook for step 2: Feature Engineering**We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the square root of the mean squared error (rmse). How do I download the dataset?To download the House Price dataset go to this website:https://www.kaggle.com/c/house-prices-advanced-regression-techniques/dataScroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice.**Note the following:**- You need to be logged in to Kaggle in order to download the datasets.- You need to accept the terms and conditions of the competition to download the dataset- If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here.==================================================================================================== House Prices dataset: Feature EngineeringIn the following cells, we will engineer / pre-process the variables of the House Price Dataset from Kaggle. We will engineer the variables so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardise the values of the variables to the same range Setting the seedIt is important to note that we are engineering variables and pre-processing data with the idea of deploying the model. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**.Let's go ahead and load the dataset.
###Code
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import MinMaxScaler
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
import warnings
warnings.simplefilter(action='ignore')
# load dataset
data = pd.read_csv('houseprice.csv')
print(data.shape)
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testBefore beginning to engineer our features, it is important to separate our data intro training and testing set. When we engineer features, some techniques learn parameters from data. It is important to learn this parameters only from the train set. This is to avoid over-fitting. **Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(data,
data['SalePrice'],
test_size=0.1,
# we are setting the seed here:
random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesFor categorical variables, we will replace missing values with the string "missing".
###Code
# make a list of the categorical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes == 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace missing values with new label: "Missing"
X_train[vars_with_na] = X_train[vars_with_na].fillna('Missing')
X_test[vars_with_na] = X_test[vars_with_na].fillna('Missing')
# check that we have no missing information in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing value indicator variable- and then replace the missing values in the original variable with the mode
###Code
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes != 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace engineer missing values as we described above
for var in vars_with_na:
# calculate the mode using the train set
mode_val = X_train[var].mode()[0]
# add binary missing indicator (in train and test)
X_train[var+'_na'] = np.where(X_train[var].isnull(), 1, 0)
X_test[var+'_na'] = np.where(X_test[var].isnull(), 1, 0)
# replace missing values by the mode
# (in train and test)
X_train[var] = X_train[var].fillna(mode_val)
X_test[var] = X_test[var].fillna(mode_val)
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[vr for var in vars_with_na if X_test[var].isnull().sum() > 0]
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeWe learned in the previous Jupyter notebook, that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
###Output
_____no_output_____
###Markdown
Numerical variable transformationIn the previous Jupyter notebook, we observed that the numerical variables are not normally distributed.We will log transform the positive numerical variables in order to get a more Gaussian-like distribution. This tends to help Linear machine learning models.
###Code
for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']:
X_train[var] = np.log(X_train[var])
X_test[var] = np.log(X_test[var])
# check that test set does not contain null values in the engineered variables
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Categorical variables Removing rare labelsFirst, we will group those categories within variables that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses, well be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# let's capture the categorical variables in a list
cat_vars = [var for var in X_train.columns if X_train[var].dtype == 'O']
def find_frequent_labels(df, var, rare_perc):
# function finds the labels that are shared by more than
# a certain % of the houses in the dataset
df = df.copy()
tmp = df.groupby(var)['SalePrice'].count() / len(df)
return tmp[tmp > rare_perc].index
for var in cat_vars:
# find the frequent categories
frequent_ls = find_frequent_labels(X_train, var, 0.01)
# replace rare categories by the string "Rare"
X_train[var] = np.where(X_train[var].isin(
frequent_ls), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(
frequent_ls), X_test[var], 'Rare')
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, var, target):
# order the categories in a variable from that with the lowest
# house sale price, to that with the highest
ordered_labels = train.groupby([var])[target].mean().sort_values().index
# create a dictionary of ordered categories to integer values
ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
# use the dictionary to replace the categorical strings by integers
train[var] = train[var].map(ordinal_label)
test[var] = test[var].map(ordinal_label)
for var in cat_vars:
replace_categories(X_train, X_test, var, 'SalePrice')
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(df, var):
# function plots median house sale price per encoded
# category
df = df.copy()
df.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylabel('SalePrice')
plt.show()
for var in cat_vars:
analyse_vars(X_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning, Neighborhood, and ExterQual. Note how, the higher the integer that now represents the category, the higher the mean house sale price.(remember that the target is log-transformed, that is why the differences seem so small). Feature ScalingFor use in linear models, features need to be either scaled or normalised. In the next section, I will scale features to the minimum and maximum values:
###Code
# capture all variables in a list
# except the target and the ID
train_vars = [var for var in X_train.columns if var not in ['Id', 'SalePrice']]
# count number of variables
len(train_vars)
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train[train_vars])
# transform the train and test set
X_train[train_vars] = scaler.transform(X_train[train_vars])
X_test[train_vars] = scaler.transform(X_test[train_vars])
X_train.head()
# let's now save the train and test sets for the next notebook!
X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
###Output
_____no_output_____
###Markdown
Machine Learning Model Building Pipeline: Feature EngineeringIn the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline, which we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps:1. Data Analysis2. Feature Engineering3. Feature Selection4. Model Building**This is the notebook for step 2: Feature Engineering**We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the root squared of the mean squared error (rmse). How do I download the dataset?To download the House Price dataset go this website:https://www.kaggle.com/c/house-prices-advanced-regression-techniques/dataScroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice.**Note the following:**- You need to be logged in to Kaggle in order to download the datasets.- You need to accept the terms and conditions of the competition to download the dataset- If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here.==================================================================================================== House Prices dataset: Feature EngineeringIn the following cells, we will engineer / pre-process the variables of the House Price Dataset from Kaggle. We will engineer the variables so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers5. Standarise the values of the variables to the same range Setting the seedIt is important to note that we are engineering variables and pre-processing data with the idea of deploying the model. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**.Let's go ahead and load the dataset.
###Code
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import MinMaxScaler
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
import warnings
warnings.simplefilter(action='ignore')
# load dataset
data = pd.read_csv('houseprice.csv')
print(data.shape)
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testBefore beginning to engineer our features, it is important to separate our data intro training and testing set. When we engineer features, some techniques learn parameters from data. It is important to learn this parameters only from the train set. This is to avoid over-fitting. **Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(data,
data['SalePrice'],
test_size=0.1,
# we are setting the seed here:
random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesFor categorical variables, we will replace missing values with the string "missing".
###Code
# make a list of the categorical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes == 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace missing values with new label: "Missing"
X_train[vars_with_na] = X_train[vars_with_na].fillna('Missing')
X_test[vars_with_na] = X_test[vars_with_na].fillna('Missing')
# check that we have no missing information in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing value indicator variable- and then replace the missing values in the original variable with the mode
###Code
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes != 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace engineer missing values as we described above
for var in vars_with_na:
# calculate the mode using the train set
mode_val = X_train[var].mode()[0]
# add binary missing indicator (in train and test)
X_train[var+'_na'] = np.where(X_train[var].isnull(), 1, 0)
X_test[var+'_na'] = np.where(X_test[var].isnull(), 1, 0)
# replace missing values by the mode
# (in train and test)
X_train[var] = X_train[var].fillna(mode_val)
X_test[var] = X_test[var].fillna(mode_val)
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[vr for var in vars_with_na if X_test[var].isnull().sum() > 0]
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeWe learned in the previous Jupyter notebook, that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
###Output
_____no_output_____
###Markdown
Numerical variable transformationIn the previous Jupyter notebook, we observed that the numerical variables are not normally distributed.We will log transform the positive numerical variables in order to get a more Gaussian-like distribution. This tends to help Linear machine learning models.
###Code
for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']:
X_train[var] = np.log(X_train[var])
X_test[var] = np.log(X_test[var])
# check that test set does not contain null values in the engineered variables
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Categorical variables Removing rare labelsFirst, we will group those categories within variables that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses, well be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# let's capture the categorical variables in a list
cat_vars = [var for var in X_train.columns if X_train[var].dtype == 'O']
def find_frequent_labels(df, var, rare_perc):
# function finds the labels that are shared by more than
# a certain % of the houses in the dataset
df = df.copy()
tmp = df.groupby(var)['SalePrice'].count() / len(df)
return tmp[tmp > rare_perc].index
for var in cat_vars:
# find the frequent categories
frequent_ls = find_frequent_labels(X_train, var, 0.01)
# replace rare categories by the string "Rare"
X_train[var] = np.where(X_train[var].isin(
frequent_ls), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(
frequent_ls), X_test[var], 'Rare')
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, var, target):
# order the categories in a variable from that with the lowest
# house sale price, to that with the highest
ordered_labels = train.groupby([var])[target].mean().sort_values().index
# create a dictionary of ordered categories to integer values
ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
# use the dictionary to replace the categorical strings by integers
train[var] = train[var].map(ordinal_label)
test[var] = test[var].map(ordinal_label)
for var in cat_vars:
replace_categories(X_train, X_test, var, 'SalePrice')
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(df, var):
# function plots median house sale price per encoded
# category
df = df.copy()
df.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylabel('SalePrice')
plt.show()
for var in cat_vars:
analyse_vars(X_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning, Neighborhood, and ExterQual. Note how, the higher the integer that now represents the category, the higher the mean house sale price.(remember that the target is log-transformed, that is why the differences seem so small). Feature ScalingFor use in linear models, features need to be either scaled or normalised. In the next section, I will scale features to the minimum and maximum values:
###Code
# capture all variables in a list
# except the target and the ID
train_vars = [var for var in X_train.columns if var not in ['Id', 'SalePrice']]
# count number of variables
len(train_vars)
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train[train_vars])
# transform the train and test set
X_train[train_vars] = scaler.transform(X_train[train_vars])
X_test[train_vars] = scaler.transform(X_test[train_vars])
X_train.head()
# let's now save the train and test sets for the next notebook!
X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
###Output
_____no_output_____
###Markdown
Machine Learning Model Building Pipeline: Feature EngineeringIn the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline, which we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps:1. Data Analysis2. Feature Engineering3. Feature Selection4. Model Building**This is the notebook for step 2: Feature Engineering**We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the root squared of the mean squared error (rmse). How do I download the dataset?To download the House Price dataset go this website:https://www.kaggle.com/c/house-prices-advanced-regression-techniques/dataScroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice.**Note the following:**- You need to be logged in to Kaggle in order to download the datasets.- You need to accept the terms and conditions of the competition to download the dataset- If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here.==================================================================================================== House Prices dataset: Feature EngineeringIn the following cells, we will engineer / pre-process the variables of the House Price Dataset from Kaggle. We will engineer the variables so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers5. Standarise the values of the variables to the same range Setting the seedIt is important to note that we are engineering variables and pre-processing data with the idea of deploying the model. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**.Let's go ahead and load the dataset.
###Code
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import MinMaxScaler
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
import warnings
warnings.simplefilter(action='ignore')
# load dataset
data = pd.read_csv('houseprice.csv')
print(data.shape)
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testBefore beginning to engineer our features, it is important to separate our data intro training and testing set. When we engineer features, some techniques learn parameters from data. It is important to learn this parameters only from the train set. This is to avoid over-fitting. **Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(data,
data['SalePrice'],
test_size=0.1,
# we are setting the seed here:
random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesFor categorical variables, we will replace missing values with the string "missing".
###Code
# make a list of the categorical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes == 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace missing values with new label: "Missing"
X_train[vars_with_na] = X_train[vars_with_na].fillna('Missing')
X_test[vars_with_na] = X_test[vars_with_na].fillna('Missing')
# check that we have no missing information in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing value indicator variable- and then replace the missing values in the original variable with the mode
###Code
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in data.columns
if X_train[var].isnull().sum() > 0 and X_train[var].dtypes != 'O'
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace engineer missing values as we described above
for var in vars_with_na:
# calculate the mode using the train set
mode_val = X_train[var].mode()[0]
# add binary missing indicator (in train and test)
X_train[var+'_na'] = np.where(X_train[var].isnull(), 1, 0)
X_test[var+'_na'] = np.where(X_test[var].isnull(), 1, 0)
# replace missing values by the mode
# (in train and test)
X_train[var] = X_train[var].fillna(mode_val)
X_test[var] = X_test[var].fillna(mode_val)
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeWe learned in the previous Jupyter notebook, that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
###Output
_____no_output_____
###Markdown
Numerical variable transformationIn the previous Jupyter notebook, we observed that the numerical variables are not normally distributed.We will log transform the positive numerical variables in order to get a more Gaussian-like distribution. This tends to help Linear machine learning models.
###Code
for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']:
X_train[var] = np.log(X_train[var])
X_test[var] = np.log(X_test[var])
# check that test set does not contain null values in the engineered variables
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF',
'GrLivArea', 'SalePrice'] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Categorical variables Removing rare labelsFirst, we will group those categories within variables that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses, well be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# let's capture the categorical variables in a list
cat_vars = [var for var in X_train.columns if X_train[var].dtype == 'O']
def find_frequent_labels(df, var, rare_perc):
# function finds the labels that are shared by more than
# a certain % of the houses in the dataset
df = df.copy()
tmp = df.groupby(var)['SalePrice'].count() / len(df)
return tmp[tmp > rare_perc].index
for var in cat_vars:
# find the frequent categories
frequent_ls = find_frequent_labels(X_train, var, 0.01)
# replace rare categories by the string "Rare"
X_train[var] = np.where(X_train[var].isin(
frequent_ls), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(
frequent_ls), X_test[var], 'Rare')
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/feature-engineering-for-machine-learning/?couponCode=UDEMY2018) in Udemy.
###Code
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, var, target):
# order the categories in a variable from that with the lowest
# house sale price, to that with the highest
ordered_labels = train.groupby([var])[target].mean().sort_values().index
# create a dictionary of ordered categories to integer values
ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
# use the dictionary to replace the categorical strings by integers
train[var] = train[var].map(ordinal_label)
test[var] = test[var].map(ordinal_label)
for var in cat_vars:
replace_categories(X_train, X_test, var, 'SalePrice')
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(df, var):
# function plots median house sale price per encoded
# category
df = df.copy()
df.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylabel('SalePrice')
plt.show()
for var in cat_vars:
analyse_vars(X_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning, Neighborhood, and ExterQual. Note how, the higher the integer that now represents the category, the higher the mean house sale price.(remember that the target is log-transformed, that is why the differences seem so small). Feature ScalingFor use in linear models, features need to be either scaled or normalised. In the next section, I will scale features to the minimum and maximum values:
###Code
# capture all variables in a list
# except the target and the ID
train_vars = [var for var in X_train.columns if var not in ['Id', 'SalePrice']]
# count number of variables
len(train_vars)
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train[train_vars])
# transform the train and test set
X_train[train_vars] = scaler.transform(X_train[train_vars])
X_test[train_vars] = scaler.transform(X_test[train_vars])
X_train.head()
# let's now save the train and test sets for the next notebook!
X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
###Output
_____no_output_____ |
notebooks/02.a_Functions.ipynb | ###Markdown
Functions Functions are encapsulated code blocks. Useful because:* code is reusable (can be used in different parts of the code or even imported from other scripts)* can be documented * can be tested Examples
###Code
import hashlib
def calculate_md5(string):
"""Calculate the md5 for a given string
Args:
        string (str or bytes): string for which the md5 hex digest is calculated;
            can be a str or a bytes instance
Returns:
str: md5 hex digest
"""
m = hashlib.md5()
if isinstance(string, str):
m.update(string.encode("utf-8"))
elif isinstance(string, bytes):
m.update(string)
else:
raise TypeError("This function supports only string input")
return m.hexdigest()
a = """
The path of the righteous man is beset
on all sides by the iniquities of the
selfish and the tyranny of evil men.
"""
calculate_md5(a)
b = b"""
The path of the righteous man is beset
on all sides by the iniquities of the
selfish and the tyranny of evil men.
"""
calculate_md5(b)
###Output
_____no_output_____
###Markdown
SideNote: Personally, I find Google's docstring format the most readable. We will use this format on day 3. Examples of Google-style Python docstrings can be found [here](https://www.sphinx-doc.org/en/1.5/ext/example_google.html). If you wonder why we test for byte strings and use `encode`, please read [this](https://realpython.com/python-encodings-guide/) well-written blog post about it. The Docstring plugin in VS Code produces the same format. Dangerous mistakes using functions What are the outcomes of these lines?
###Code
def extend_list_with_three_none(input_list=[]):
"""Extend input_list with 3 * None or
create new list with three nones
"""
input_list += [None, None, None]
return input_list
extend_list_with_three_none()
extend_list_with_three_none()
extend_list_with_three_none()
###Output
_____no_output_____
###Markdown
Fix it !
###Code
def extend_list_with_three_none(input_list):
"""Extend input_list with 3 * None
"""
input_list += [None, None, None]
return input_list
###Output
_____no_output_____
###Markdown
Setting up functions properly**Never** set default kwargs in functions to mutable objects, as they are initialized only once, exist until the program is stopped, and will behave strangely.
###Code
def extend_list_with_three_none_without_bug(input_list = None):
"""Extend input_list with 3 None"""
if input_list is None:
input_list = []
input_list += [None, None, None]
return input_list
extend_list_with_three_none_without_bug()
extend_list_with_three_none_without_bug()
extend_list_with_three_none_without_bug()
###Output
_____no_output_____
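###Markdown
To make the pitfall above concrete, here is a small illustrative sketch (not part of the original material): with a mutable default such as `input_list=[]`, the default object is created once, stored on the function itself, and shared between calls, so it keeps growing.
###Code
def buggy_extend(input_list=[]):
    """Deliberately buggy: the default list is created only once."""
    input_list += [None, None, None]
    return input_list

print(buggy_extend())             # [None, None, None]
print(buggy_extend())             # [None, None, None, None, None, None] - same list again!
print(buggy_extend.__defaults__)  # the shared default object lives on the function itself
###Output
_____no_output_____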
###Markdown
Scopes: local & global
###Code
counter = 0 # global
def increase_counter():
counter += 10 # local
return counter
increase_counter()
counter = 0
def increase_counter(counter):
counter += 10
return
counter = increase_counter(counter)
counter
counter = 0
def increase_counter(counter):
counter += 10
return counter # or directly return counter += 10
counter = increase_counter(counter)
counter
###Output
_____no_output_____
###Markdown
If unsure, avoid using globals altogether!Advantages:* variables can be overwritten in functions without unexpectedly changing code elsewhere* code becomes very readable If you need globals (and please avoid using them) ...
###Code
counter = 0
def increase_counter():
"""Ugly!"""
global counter
counter += 10
return
increase_counter()
counter
###Output
_____no_output_____
###Markdown
The biggest danger is that `counter` in the global namespace can be overwritten by any routine; hence, if you really need to use globals (please don't!!), then use namespaces
###Code
import course
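# NOTE: 'course' is assumed to be a module provided with the course material;
# any imported module object can serve as a namespace for attributes like this.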
course.student_counter = 0
def increase_counter():
"""Still Ugly as not very explicit"""
course.student_counter += 10
return
increase_counter()
course.student_counter
###Output
_____no_output_____
###Markdown
Changing object during iteration This is also a common mistake when using other modules, e.g. pandas. Example
###Code
students = [
"Anne",
"Ben",
"Chris",
"Don",
"Charles"
]
for student in students:
    student = student + " - 5th semester!"
students
###Output
_____no_output_____
###Markdown
How to change the list?
###Code
for pos, student in enumerate(students):
    students[pos] = student + " - 5th semester!"
students
students = [
"Anne",
"Ben",
"Chris",
"Don",
"Charles"
]
students
for pos, student in enumerate(students):
if student[0] == "C":
# if student.startswith("C") is True:
students.pop(pos)
students
###Output
_____no_output_____
###Markdown
How to delete all students starting with "C"?
###Code
for pos, student in enumerate(students):
if student[0] == "C":
# if student.startswith("C") is True:
students.pop(pos)
###Output
_____no_output_____
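###Markdown
A safer pattern (an illustrative sketch, not part of the original notebook): instead of removing items from the list you are iterating over, build a new list, or iterate over a copy of the original.
###Code
students = ["Anne", "Ben", "Chris", "Don", "Charles"]

# option 1: build a new list with a comprehension
students = [student for student in students if not student.startswith("C")]
print(students)  # ['Anne', 'Ben', 'Don']

# option 2: iterate over a copy while removing from the original
students = ["Anne", "Ben", "Chris", "Don", "Charles"]
for student in students.copy():
    if student.startswith("C"):
        students.remove(student)
print(students)  # ['Anne', 'Ben', 'Don']
###Output
_____no_output_____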
ML_Pandas_Profiling.ipynb | ###Markdown
###Code
#! pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip
import pandas as pd
from pandas_profiling import ProfileReport
dataset = pd.read_csv('https://raw.githubusercontent.com/Carlosrnes/group_work_ml/main/techscape-ecommerce/train.csv')
dataset = dataset.drop(['Access_ID'], axis=1)
dataset['Date'] = pd.to_datetime(dataset['Date'], format='%d-%b-%y')
dataset.head(5)
profile_report = ProfileReport(dataset, html={"style": {"full_width": True}})
profile_report.to_file("/content/techscape.html")
###Output
_____no_output_____ |
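###Markdown
Optional (a small addition, assuming a recent pandas-profiling version): the same report can also be rendered directly inside the notebook, in addition to the exported HTML file.
###Code
# render the report inline in the notebook output area
profile_report.to_notebook_iframe()
###Output
_____no_output_____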
nutrition_analysis.ipynb | ###Markdown
My Home Nutrition Analysis Objective:Over the past few years, I have started paying more attention to my nutrition since nutrition is a large part of maintaining a healthy lifestyle. With this analysis I want to observe how the nutrition of foods within my home varies. In addition to our household foods, I collected some nutritional data for fast food items we commonly enjoy. About The Data:I collected the data by reading the nutrition fact labels on food items within my house. For nutrition on whole fruits and fast food items I took the nutrition from the company's website. Each item contains its macronutrients and micronutrients with the measures given on the nutrition label. Some items contain a value of `0.99`, which is a placeholder I used if the nutrition label said **'< 1g'**. Some items were given a value of zero for certain micronutrients where the micronutrient was labeled with a percentage as opposed to an actual measurement. Not all of the foods in my house were used for this data; I mainly collected the nutrition facts for foods and drinks we regularly consume, since there are some items in our pantry which never get touched.**NOTE:**I use the term **"Caution Zone"** to describe foods which I feel should be consumed cautiously. This is not a definitive categorization, since most foods can be consumed in moderation, and just because an item did not meet the caution criteria does not mean it is exempt from moderation. The values for the caution zones I used correspond to 16-20% of my personal daily intake, which I determined from a macronutrient calculator linked at the end of this notebook. AnalysisI will begin by importing the modules, importing the Excel file with the data, and making sure there is no missing data or incorrect data types
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import plotly as py
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
df = pd.read_excel('https://github.com/a-camarillo/nutrition-analysis/blob/master/data/nutrition_facts.xlsx?raw=true')
df.head()
df.isna().sum()
df.dtypes
###Output
_____no_output_____
###Markdown
There are no missing values and all of the data types are as expected so now I'm going to do some quick cleaning of the column names to make things a little easier
###Code
"hide_cell"
df.columns = map(str.lower, df.columns)
df.columns = [column.replace(' ','_') for column in df.columns]
df.columns = [column.replace('(grams)','_g') for column in df.columns]
df.columns = [column.replace('(milligrams)','_mg') for column in df.columns]
df.columns = [column.replace('(micrograms)','_mcg') for column in df.columns]
"hide_cell"
df.columns
"hide_cell"
#One more rename to deal with sneaky folate
df.rename(columns={'folate(micrograms':'folate_mcg'},inplace=True)
df.columns
###Output
_____no_output_____
###Markdown
Before I begin visualizing the data, I am going to create a function for normalization allowing for another comparison of each food item.
###Code
def per_100g(Series):
    ''' Pass in a macronutrient series and find its value per 100 grams
for each item '''
value = (Series/df['serving_measurement_g']) * 100
return value
###Output
_____no_output_____
###Markdown
FatFirst macronutrient I want analyze is total fat, I will begin by adding a column for fat per 100 grams, and looking at some of the top results
###Code
"hide_cell"
fat = df[['food_item','serving_size','serving_measurement_g','total_fat_g']]
"hide_cell"
fat.head(2)
"hide_cell"
fat.insert(loc=len(fat.columns),column='fat_per_100g',value=per_100g(fat['total_fat_g']))
"hide_cell"
#create a 'caution' column for foods which contain high fat content
fat.insert(loc=len(fat.columns),column='caution',value='No')
for row in fat.index:
if fat.at[row,'total_fat_g'] >= 10:
fat.at[row,'caution'] = 'Yes'
"hide_cell"
fat_20 = fat.nlargest(20,'fat_per_100g',keep='all')
"hide_input"
plt.figure(figsize=(8,5))
sns.barplot(y='food_item',x='fat_per_100g',data=fat_20)
plt.title('Top 20 Items by Fat Per 100g')
###Output
_____no_output_____
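###Markdown
Side note (an optional sketch using the `fat` DataFrame defined above): the row-by-row caution loop can also be written in a vectorized way with `numpy.where`, which avoids the explicit loop over the index.
###Code
import numpy as np

# vectorized equivalent of the row-by-row caution loop above
fat['caution'] = np.where(fat['total_fat_g'] >= 10, 'Yes', 'No')
###Output
_____no_output_____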
###Markdown
Much to my surprise, Best Foods Mayonnaise and Skippy Peanut Butter are considerably high in fat per 100 grams. As someone who frequently consumes both of these products, I will definitely have to monitor my intake to avoid having too much fat in my diet. Another surprisingly high contender for me is the Ritz Crackers; it's hard not to eat an entire pack of these in one sitting, but I might have to reconsider next time a craving hits. Among the top 20 items for fat per 100 grams, the expected fast food items are there but much lower than some of the household items. I also want to look at how the fat per 100 grams compares to a single serving, since I am curious to see if the same items are as fatty relative to serving size.
###Code
"hide_input"
fat_fig = px.scatter(fat,x='serving_measurement_g',y='fat_per_100g',hover_data=['food_item'],
labels = {'serving_measurement_g':'Single Serving In Grams','fat_per_100g':'Total Fat Per 100 Grams'},
title='Serving Size vs. Fat Per 100 Grams(Interactive Plot)',
color='caution',
color_discrete_map={'No':'Blue','Yes':'Orange'})
fat_fig.show()
###Output
_____no_output_____
###Markdown
The above plot compares each item's single serving to its respective total fat per 100 grams. Some takeaways from this plot are:**Assuming you adhere to proper serving sizes**, Ritz Crackers and the Sabra Hummus are not as fattening as the previous plot might have indicated. Due to each having a small serving size relative to fat per 100 grams, the actual fat per serving becomes relatively small (about 5 g each). Lucerne Cheese Blend is also not as bad as the fat per 100 grams alone might have indicated; however, it should still be consumed cautiously, since the fat for a single serving is still considerable.**THE CAUTION ZONE:**I am considering the caution zone (for total fat) to be foods that are shown to have high fat content per serving (greater than or equal to 10 g). These can easily be identified as the items around or above the 10 g mark for Total Fat Per 100 Grams at a 100 gram Serving Size or greater. Looking all the way to the right is my go-to Rubio's choice, the Ancho Citrus Shrimp Burrito. At about 450 grams for the burrito and 10 grams of fat per 100 grams of serving, this burrito packs a whopping 45 grams of fat. This is definitely something to take note of, as I have never shied away from eating the whole thing in one sitting. On the opposite side of the graph, but worth noting as well, is one of my favorites, Skippy Creamy Peanut Butter. Although its serving size is on the lower end, the high fat per 100 grams reveals a single serving of peanut butter to have about 16 grams of fat. Again, the amount of peanut butter I use is something I will have to keep in mind the next time I go to make a sandwich. Other culprits of high fat vary from fast food items like fries to some favorite household foods like tortillas. I would also like to reiterate, as I likely will in each section, that the caution zone is not definitive and does not mean these items have to be exempt from one's diet; rather, I feel they should be consumed moderately. CarbohydratesFor Carbohydrates and Protein I will perform analysis similar to Fats.
###Code
"hide_cell"
carbs=df[['food_item','serving_size','serving_measurement_g','total_carbohydrates_g']]
"hide_cell"
carbs.insert(loc=len(carbs.columns),column='carbohydrates_per_100g',value=per_100g(carbs['total_carbohydrates_g']))
"hide_cell"
#create a 'caution' column for foods which contain high fat content
carbs.insert(loc=len(carbs.columns),column='caution',value='No')
for row in carbs.index:
if carbs.at[row,'total_carbohydrates_g'] >= 44:
carbs.at[row,'caution'] = 'Yes'
"hide_cell"
carbs_20 = carbs.nlargest(20,'carbohydrates_per_100g',keep='all')
"hide_input"
plt.figure(figsize=(8,5))
sns.barplot(y='food_item',x='carbohydrates_per_100g',data=carbs_20)
plt.title('Top 20 Items by Carbohydrates Per 100g')
###Output
_____no_output_____
###Markdown
Looking at the carbohydrates per 100 grams, the main culprits are, for the most part, as expected. A lot of items in this list are grain-based products, which are known to have a higher carbohydrate content. The surprise items on this list are the fruit snacks, Fruit By The Foot and Welch's mixed fruit. Being fruit-based foods, I did not expect these to rank high in carbohydrates.
###Code
"hide_input"
carbs_fig = px.scatter(carbs,x='serving_measurement_g',y='carbohydrates_per_100g',hover_data=['food_item'],
labels = {'serving_measurement_g':'Single Serving In Grams','carbohydrates_per_100g':'Total Carbs Per 100 Grams'},
title='Serving Size vs. Carbohydrates Per 100 Grams(Interactive Plot)',
color='caution',
color_discrete_map={'No':'Blue','Yes':'Orange'})
carbs_fig.show()
###Output
_____no_output_____
###Markdown
The first thing I noted from this visualization is that the Annie's Organic Macaroni and Cheese actually contains more carbs than the Kraft Single Serving. However, the Kraft Macaroni and Cheese does contain more fat, so there is a trade-off. The second, more obvious, thing I noted is how few items there are in the caution zone for carbohydrates. The criterion for the carb caution zone was for a single serving to contain 44 grams of carbs or more. So despite cereal topping the charts for carbs per 100 grams, **if you adhere to the single serving size** they are actually an adequate source of carbohydrates. SugarsSugars are actually a form of carbohydrate and contribute to overall carbohydrate intake, so for the sake of consistency I will analyze sugar content next.
###Code
"hide_cell"
sugars = df[['food_item','serving_size','serving_measurement_g','total_sugars_g','added_sugars_g']]
"hide_cell"
sugars.insert(loc=len(sugars.columns),column='total_sugars_per_100g',value=per_100g(sugars['total_sugars_g']))
sugars.insert(loc=len(sugars.columns),column='added_sugars_per_100g',value=per_100g(sugars['added_sugars_g']))
"hide_cell"
sugars.insert(loc=len(sugars.columns),column='caution',value='No')
for row in sugars.index:
if sugars.at[row,'total_sugars_g'] >= 9:
sugars.at[row,'caution'] = 'Yes'
"hide_cell"
sug_20 = sugars.nlargest(20,'total_sugars_per_100g')
add_sug_20 = sugars.nlargest(20,'added_sugars_per_100g')
"hide_input"
plt.figure(figsize=(11,7.5))
plt.subplot(211)
sns.barplot(data=sug_20,y='food_item',x='total_sugars_per_100g')
plt.title('Top 20 Items by Total Sugars Per 100 Grams')
plt.subplot(212)
sns.barplot(data=add_sug_20,y='food_item',x='added_sugars_per_100g')
plt.title('Top 20 Items by Added Sugars Per 100 Grams')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Interestingly enough, there appears to be quite some overlap not only between the total sugars and added sugars items, but with the high-carb items as well. This makes sense, since sugar content makes up part of the total carbohydrate content, so any item with both high carbohydrates and high sugar could be a potential red flag. One other thing I find interesting from these charts is that the Snickers Candy Bar is second highest in terms of total sugars per 100 grams but does not appear in the top 20 of added sugars per 100 grams. This indicates that in terms of sugar content, a Snickers Bar might actually be better than some other food choices here.
###Code
"hide_cell"
caution=sugars[sugars['caution']=='Yes']
no_caution=sugars[sugars['caution']=='No']
added_sugars=sugars[sugars['added_sugars_g']>0]
"hide_input"
sugars_fig = py.subplots.make_subplots(rows=2,cols=1,
subplot_titles=('Total Sugars',
'Added Sugars'),
vertical_spacing=0.07,
shared_xaxes=True)
sugars_fig.add_trace(go.Scatter(x=no_caution['serving_measurement_g'],y=no_caution['total_sugars_per_100g'],mode='markers',
name='Total Sugars',
hovertemplate=
'Single Serving in Grams: %{x}'+
'<br>Total Sugars Per 100 Grams: %{y}<br>'+
'Food Item: %{text}',
text=no_caution['food_item']),
row=1, col=1)
sugars_fig.add_trace(go.Scatter(x=caution['serving_measurement_g'],y=caution['total_sugars_per_100g'],mode='markers',
name='Caution',
hovertemplate=
'Single Serving in Grams: %{x}'+
'<br>Total Sugars Per 100 Grams: %{y}<br>'+
'Food Item: %{text}',
text=caution['food_item']),
row=1, col=1)
sugars_fig.add_trace(go.Scatter(x=added_sugars['serving_measurement_g'],
y=added_sugars['added_sugars_per_100g'],mode='markers',
name='Added Sugars',
hovertemplate=
'Single Serving in Grams: %{x}'+
'<br>Added Sugars Per 100 Grams: %{y}<br>'+
'Food Item: %{text}',
text=added_sugars['food_item'],
marker={'color':'Orange'}),
row=2, col=1)
sugars_fig.update_layout(title_text='Serving Size vs. Sugars Per 100 Grams')
sugars_fig.update_xaxes(title_text='Single Serving In Grams',row=2,col=1)
sugars_fig.update_yaxes(title_text='Sugars Per 100 Grams(Interactive Plot)')
sugars_fig.show()
###Output
_____no_output_____
###Markdown
The first graph here displays the total sugars per 100 grams versus single serving size in grams, with **caution** on any items that contain 9 or more grams of sugar per serving. The second graph contains all of the items which contain added sugars, and since added sugars should generally be avoided, this can be considered its own caution zone. Surprisingly, all of the fruit/fruit-based items, except the single clementine, met the criteria for being high in sugar. Although fruits, particularly whole fruits, are generally considered to be essential in a well-balanced diet, those generally consumed in my household are still high in **natural sugar**. The big culprits from this sugar analysis are cereals. In the previous section I noted how cereal can still be considered an adequate source of carbohydrates, but after some further investigation an overwhelming amount of total carbohydrates comes from **added sugar**. To get a better understanding, below is a plot showing the relationship between food items' carbohydrate content and their added sugar content.
###Code
"hide_input"
carbs_sug = px.scatter(df,x='total_carbohydrates_g',y='added_sugars_g' ,hover_data=['food_item'],
labels = {'total_carbohydrates_g':'Single Serving Carbs(g)','added_sugars_g':'Single Serving Added Sugars(g)'},
title='Carbohydrate content vs. Added Sugar Content(Interactive Plot)')
carbs_sug.show()
###Output
_____no_output_____
###Markdown
The cereals in this data reside in the middle of the plot, and it can be seen that added sugar makes up between one-third and one-half of their total carbohydrates for a single serving. Some huge red flags are the Coca-Cola and Aunt Jemima Syrup, which both get 100% of their carbohydrates from added sugars. The American Heart Association's recommended added sugar intake is no more than 36 grams a day for men and 25 grams for women, so it is quite alarming that some of these foods contain half of, or even exceed, that amount in just a single serving.[5] ProteinNow for the analysis of protein content. Since high protein is generally recommended in a nutritious diet, I will omit a caution zone for these items. However, it is important to note that excess protein can actually be detrimental, since it is stored as fat.[[6]](https://www.healthline.com/health/too-much-protein)
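The cells below reuse the `per_100g` helper defined earlier in the notebook. Its actual implementation is not shown in this section; the following is only a hypothetical sketch of what such a helper might look like, assuming it normalizes a per-serving nutrient column by the index-aligned `serving_measurement_g` column of `df`.

```python
# Hypothetical sketch of the per_100g helper (the real definition lives earlier in the notebook).
def per_100g(nutrient_per_serving):
    # Scale a per-serving nutrient amount to a per-100-gram basis.
    # Assumes `df['serving_measurement_g']` is index-aligned with the input column.
    return (nutrient_per_serving / df['serving_measurement_g'] * 100).round(2)
```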
###Code
"hide_cell"
protein = df[['food_item','serving_size','serving_measurement_g','protein_g']]
"hide_cell"
protein.insert(loc=len(protein.columns),column='protein_per_100g',value=per_100g(protein['protein_g']))
"hide_cell"
protein_20 = protein.nlargest(20,'protein_per_100g',keep='all')
"hide_input"
plt.figure(figsize=(8,5))
sns.barplot(y='food_item',x='protein_per_100g',data=protein_20)
plt.title('Top 20 Items by Protein Per 100g')
"hide_input"
protein_fig = px.scatter(protein,x='serving_measurement_g',y='protein_per_100g',hover_data=['food_item'],
labels = {'serving_measurement_g':'Single Serving In Grams','protein_per_100g':'Total Protein Per 100 Grams'},
title='Serving Size vs. Protein Per 100 Grams(Interactive Plot)')
protein_fig.show()
###Output
_____no_output_____ |
003-Style GAN with ONNX Model.ipynb | ###Markdown
Read Image
###Code
file_path = 'data/images/logo.jpg'
image = cv2.imread(file_path)
plt.imshow(cv2.cvtColor(image,cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Prepare Model
###Code
def PrepareNetWork(onnx_model,device):
ie = IECore()
############## Slight Change #############
net = ie.read_network(model = onnx_model)
#########################################
####################### Very Important #############################################
# Check to make sure that the plugin has support for all layers in the model
supported_layers = ie.query_network(net,device_name = device)
    unsupported_layers = [layer for layer in supported_layers.values() if layer != device]
if len(unsupported_layers)>0:
raise Exception(f"Number of unsupported layers {len(unsupported_layers)}")
####################################################################################
exec_net = ie.load_network(network=net, device_name = device)
# Store name of input and output blobs
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
# Extract Dimension (n:batch, c:color channel,h: height, w: width )
n, c ,h ,w = net.input_info[input_blob].input_data.shape
print('Extract Model Input Dimension:',n,c,h,w)
return (input_blob,output_blob), exec_net, (n,c,h,w)
def PrepareInputImage(input_path,n,c,h,w):
# height width channels
image = cv2.imread(input_path)
# Resize
in_frame = cv2.resize(image,(w,h))
in_frame = in_frame.transpose((2,0,1)) # Moving color channels to head
in_frame = in_frame.reshape((n,c,h,w))
return image, in_frame
def MakePrediction(execution_network, input_blob, inference_frame):
st_time = time.time()
# Run Inference
result = execution_network.infer(inputs = {input_blob:inference_frame})
ed_time = time.time()
time_sp = ed_time-st_time
FPS = np.round((1/time_sp),4)
print(f"FPS: {FPS}\n")
return FPS,result
# Model Path
onnx_model = 'intel_models/StyleGAN.onnx'
# Device
device = 'CPU' # Options include CPU, GPU, MYRIAD, [HDDL or HETERO] I am not familiar with the last two
# Prepare Network
inputs_outputs, execution_network, dimensions = PrepareNetWork(onnx_model,device)
# Extract Required Input dimension
n,c,h,w = dimensions
# Extract input and output names
input_blob, output_blob = inputs_outputs
# Print Networf Information
print(f"Input_name: {input_blob:>6}\nOutput_name: {output_blob:>5}")
print(f"OpenVINO Engine: {execution_network}")
original_image, inference_frame = PrepareInputImage(file_path,n,c,h,w)
plt.imshow(cv2.cvtColor(original_image,cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
FPS, result = MakePrediction(execution_network,input_blob,inference_frame)
# Extract the styled output and drop the batch dimension
styled_image = result[output_blob]
styled_image = styled_image[0]
# Convert CHW -> HWC for display, clamp to the valid [0, 255] range, then rescale to [0, 1] for imshow
styled_image = styled_image.transpose((1,2,0))
styled_image = np.clip(styled_image, 0, 255)
plt.imshow(styled_image/255)
plt.axis('off')
plt.show()
###Output
FPS: 23.2578
|
tracking/notebooks/jupyter/1_petrol_regression_lab.ipynb | ###Markdown
Problem Tutorial 1: Regression ModelWe want to predict the gas consumption (in millions of gallons/year) in 48 of the US states based on some key features. These features are * petrol tax (in cents); * per capita income (in US dollars); * paved highways (in miles); and * population of people with driving licences <img src="https://informedinfrastructure.com/wp-content/uploads/2012/06/traffic-jam.jpg" alt="Traffic Jam" width="600"> <img src="https://miro.medium.com/max/593/1*pfmeGgGM5sxmLBQ5IQfQew.png" alt="Matrix" width="600"> And it seems like a bad consumption problem to have ... Solution:Since this is a regression problem where the value is a range of numbers, we can use the common Random Forest Algorithm in Scikit-Learn. Most regression models are evaluated with four [standard evaluation metrics](https://medium.com/usf-msds/choosing-the-right-metric-for-machine-learning-models-part-1-a99d7d7414e4): * Mean Absolute Error (MAE)* Mean Squared Error (MSE)* Root Mean Squared Error (RMSE)* R-squared (r2)This example is borrowed from this [source](https://stackabuse.com/random-forest-algorithm-with-python-and-scikit-learn/) and modified and modularized for this tutorial. Aims of this tutorial:1. Understand the MLflow Tracking API2. How to use the MLflow Tracking API3. Use the MLflow API to experiment with several Runs4. Interpret and observe runs via the MLflow UISome Resources:* https://mlflow.org/docs/latest/python_api/mlflow.html* https://www.saedsayad.com/decision_tree_reg.htm* https://towardsdatascience.com/understanding-random-forest-58381e0602d2* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html* https://towardsdatascience.com/regression-an-explanation-of-regression-metrics-and-what-can-go-wrong-a39a9793d914* https://www.analyticsvidhya.com/blog/2020/04/feature-scaling-machine-learning-normalization-standardization/ Define all the classes and bring them into scope Load the Dataset
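Aside: as a rough, self-contained sketch — not the `RFRModel` implementation from `setup/rfr_regression_cls.ipynb` — the four metrics above can be computed with scikit-learn and logged with the MLflow Tracking API as shown below. The target column name `Petrol_Consumption` is an assumption based on the dataset source.

```python
# Illustrative sketch (not the RFRModel class): train a RandomForestRegressor,
# compute MAE/MSE/RMSE/R2, and log parameters and metrics to MLflow.
import mlflow
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def run_once(dataset, params):
    X = dataset.drop('Petrol_Consumption', axis=1)   # assumed target column name
    y = dataset['Petrol_Consumption']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    with mlflow.start_run(run_name="Regression Petrol Consumption Sketch"):
        model = RandomForestRegressor(**params).fit(X_train, y_train)
        pred = model.predict(X_test)
        metrics = {
            "mae": mean_absolute_error(y_test, pred),
            "mse": mean_squared_error(y_test, pred),
            "rmse": np.sqrt(mean_squared_error(y_test, pred)),
            "r2": r2_score(y_test, pred),
        }
        mlflow.log_params(params)
        mlflow.log_metrics(metrics)
    return metrics
```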
###Code
%run ./setup/lab_utils_cls.ipynb
%run ./setup/rfr_regression_cls.ipynb
%run ./setup/rfc_classification_cls.ipynb
%run ./setup/rfr_regression_base_exp_cls.ipynb
# Load and print dataset
dataset = Utils.load_data("https://raw.githubusercontent.com/dmatrix/mlflow-workshop-part-1/master/data/petrol_consumption.csv")
dataset.head(5)
###Output
_____no_output_____
###Markdown
Get descriptive statistics for the features
###Code
dataset.describe()
# Iterate over several runs with different parameters, such as the number of trees.
# For the exercises, try changing max_depth and the number of estimators, and consult the
# documentation for other tuning parameters that may improve the outcome; supply them to the class constructor.
# Exercises 1 & 2:
# 1) add key-value parameters to this list
# 2) iterate over the list
# 3) Compute R2 in the RFRModel class
max_depth = 0
for n in range (20, 250, 50):
max_depth = max_depth + 2
params = {"n_estimators": n, "max_depth": max_depth}
rfr = RFRModel.new_instance(params)
(experimentID, runID) = rfr.mlflow_run(dataset, run_name="Regression Petrol Consumption Model", verbose=True)
print("MLflow Run completed with run_id {} and experiment_id {}".format(runID, experimentID))
print("-" * 100)
###Output
----------------------------------------------------------------------------------------------------
Inside MLflow Run with run_id dd7826fa34a7406a96807d988a09d3cf and experiment_id 0
Estimator trees : 20
Mean Absolute Error : 54.48697169946864
Mean Squared Error : 4318.663175664619
Root Mean Squared Error: 65.71653654647831
MLflow Run completed with run_id dd7826fa34a7406a96807d988a09d3cf and experiment_id 0
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
Inside MLflow Run with run_id 2f6876e5e87843c2b4f811ef792e0037 and experiment_id 0
Estimator trees : 70
Mean Absolute Error : 52.76757374558217
Mean Squared Error : 3951.3088720653564
Root Mean Squared Error: 62.85943741448341
MLflow Run completed with run_id 2f6876e5e87843c2b4f811ef792e0037 and experiment_id 0
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
Inside MLflow Run with run_id 85f9c776da1b4b388de31e182399f597 and experiment_id 0
Estimator trees : 120
Mean Absolute Error : 43.90344392875643
Mean Squared Error : 3093.3135350858697
Root Mean Squared Error: 55.61756498702429
MLflow Run completed with run_id 85f9c776da1b4b388de31e182399f597 and experiment_id 0
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
Inside MLflow Run with run_id f23643fcf1df4cd6b2b50dca455e9f61 and experiment_id 0
Estimator trees : 170
Mean Absolute Error : 47.189163738222575
Mean Squared Error : 3414.128056665042
Root Mean Squared Error: 58.43054044474552
MLflow Run completed with run_id f23643fcf1df4cd6b2b50dca455e9f61 and experiment_id 0
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
Inside MLflow Run with run_id f542abc2afee44f5afacb45b1c2b9c7c and experiment_id 0
Estimator trees : 220
Mean Absolute Error : 50.61698484848484
Mean Squared Error : 3649.9045274127625
Root Mean Squared Error: 60.41443972605194
MLflow Run completed with run_id f542abc2afee44f5afacb45b1c2b9c7c and experiment_id 0
----------------------------------------------------------------------------------------------------
###Markdown
**Note**:With 20 trees, the root mean squared error is about `65.7` (see the run output above), which is greater than 10 percent of the average petrol consumption, i.e., `576.77`. This may suggest that we have not used enough estimators (trees). Let's Explore the MLflow UI* Add Notes & Tags* Compare Runs: pick the two best runs* Annotate them with descriptions and tags* Evaluate the best run
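The comparison described above can also be done programmatically, before (or instead of) opening the UI. The sketch below uses `mlflow.search_runs`; the metric key `rmse` is an assumption and should be adjusted to whatever keys `RFRModel` actually logs.

```python
# Sketch: list runs for experiment 0 and pick the one with the lowest RMSE.
# The 'metrics.rmse' column name assumes RMSE was logged under the key 'rmse'.
import mlflow

runs = mlflow.search_runs(experiment_ids=["0"])
if "metrics.rmse" in runs.columns:
    best = runs.sort_values("metrics.rmse").iloc[0]
    print("Best run:", best["run_id"], "RMSE =", best["metrics.rmse"])
```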
###Code
!mlflow ui
###Output
[2020-08-11 14:45:24 -0700] [60345] [INFO] Starting gunicorn 20.0.4
[2020-08-11 14:45:24 -0700] [60345] [INFO] Listening at: http://127.0.0.1:5000 (60345)
[2020-08-11 14:45:24 -0700] [60345] [INFO] Using worker: sync
[2020-08-11 14:45:24 -0700] [60348] [INFO] Booting worker with pid: 60348
|
site/en/guide/estimators/index.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Estimators View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This document introduces `tf.estimator`—a high-level TensorFlowAPI that greatly simplifies machine learning programming. Estimators encapsulatethe following actions:* training* evaluation* prediction* export for servingYou may either use the pre-made Estimators we provide or write yourown custom Estimators. All Estimators—whether pre-made or custom—areclasses based on the `tf.estimator.Estimator` class.For a quick example try [Estimator tutorials](../../tutorials/estimators/linear.ipynb). For an overview of the API design, see the [white paper](https://arxiv.org/abs/1708.02637).Note: TensorFlow also includes a deprecated `Estimator` class at`tf.contrib.learn.Estimator`, which you should not use. Estimator advantagesEstimators provide the following benefits:* You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.* Estimators simplify sharing implementations between model developers.* You can develop a state of the art model with high-level intuitive code. In short, it is generally much easier to create models with Estimators than with the low-level TensorFlow APIs.* Estimators are themselves built on `tf.keras.layers`, which simplifies customization.* Estimators build the graph for you.* Estimators provide a safe distributed training loop that controls how and when to: * build the graph * initialize variables * load data * handle exceptions * create checkpoint files and recover from failures * save summaries for TensorBoardWhen writing an application with Estimators, you must separate the data inputpipeline from the model. This separation simplifies experiments withdifferent data sets. Pre-made EstimatorsPre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. `tf.estimator.DNNClassifier`, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks. Structure of a pre-made Estimators programA TensorFlow program relying on a pre-made Estimator typically consists of the following four steps: 1. Write one or more dataset importing functions.For example, you might create one function to import the training set and another function to import the test set. Each dataset importing function must return two objects:* a dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data* a Tensor containing one or more labelsFor example, the following code illustrates the basic skeleton for an input function:```def input_fn(dataset): ... manipulate dataset, extracting the feature dict and the label return feature_dict, label```See [data guide](../../guide/data.md) for details. 2. Define the feature columns.Each `tf.feature_column` identifies a feature name, its type, and any input pre-processing. For example, the following snippet creates three feature columns that hold integer or floating-point data. The first two feature columns simply identify the feature's name and type. 
The third feature column also specifies a lambda the program will invoke to scale the raw data:``` Define three numeric feature columns.population = tf.feature_column.numeric_column('population')crime_rate = tf.feature_column.numeric_column('crime_rate')median_education = tf.feature_column.numeric_column( 'median_education', normalizer_fn=lambda x: x - global_education_mean)```For further information, it is recommended to check this [tutorial](https://www.tensorflow.org/tutorials/keras/feature_columns). 3. Instantiate the relevant pre-made Estimator.For example, here's a sample instantiation of a pre-made Estimator named `LinearClassifier`:``` Instantiate an estimator, passing the feature columns.estimator = tf.estimator.LinearClassifier( feature_columns=[population, crime_rate, median_education])```For further information, it is recommended to check this [tutorial](https://www.tensorflow.org/tutorials/estimators/linear). 4. Call a training, evaluation, or inference method.For example, all Estimators provide a `train` method, which trains a model.``` `input_fn` is the function created in Step 1estimator.train(input_fn=my_training_set, steps=2000)```You can see an example of this below. Benefits of pre-made EstimatorsPre-made Estimators encode best practices, providing the following benefits:* Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster.* Best practices for event (summary) writing and universally useful summaries.If you don't use pre-made Estimators, you must implement the preceding features yourself. Custom EstimatorsThe heart of every Estimator—whether pre-made or custom—is its *model function*, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself. Recommended workflow1. Assuming a suitable pre-made Estimator exists, use it to build your first model and use its results to establish a baseline.2. Build and test your overall pipeline, including the integrity and reliability of your data with this pre-made Estimator.3. If suitable alternative pre-made Estimators are available, run experiments to determine which pre-made Estimator produces the best results.4. Possibly, further improve your model by building your own custom Estimator.
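The workflow above mentions building your own custom Estimator, but the guide stops short of an example. As a rough, hedged sketch (not part of the original guide), a custom Estimator is built around a model function with the standard `(features, labels, mode, params)` signature that returns a `tf.estimator.EstimatorSpec` for each mode:

```python
# Minimal custom-Estimator sketch (illustrative only, not from the original guide).
import tensorflow as tf

def my_model_fn(features, labels, mode, params):
    # Build the network from the feature columns passed in via `params`.
    net = tf.compat.v1.feature_column.input_layer(features, params['feature_columns'])
    logits = tf.keras.layers.Dense(params['n_classes'])(net)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})

    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    optimizer = tf.compat.v1.train.AdamOptimizer()
    train_op = optimizer.minimize(loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Hypothetical usage; `my_feature_columns` would come from step 2 above.
# classifier = tf.estimator.Estimator(
#     model_fn=my_model_fn,
#     params={'feature_columns': my_feature_columns, 'n_classes': 3})
```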
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Create an Estimator from a Keras modelYou can convert existing Keras models to Estimators with `tf.keras.estimator.model_to_estimator`. Doing so enables your Kerasmodel to access Estimator's strengths, such as distributed training.Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
###Code
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
input_shape=(160, 160, 3), include_top=False)
estimator_model = tf.keras.Sequential([
keras_mobilenet_v2,
tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # sigmoid for a single-unit binary output (softmax over one unit is constant)
])
# Compile the model
estimator_model.compile(
optimizer='adam',
loss='binary_crossentropy',
    metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create an `Estimator` from the compiled Keras model. The initial model state of the Keras model is preserved in the created `Estimator`:
###Code
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
###Output
_____no_output_____
###Markdown
Treat the derived `Estimator` as you would with any other `Estimator`.
###Code
IMG_SIZE = 160 # All images will be resized to 160x160
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
def train_input_fn(batch_size):
data = tfds.load('cats_vs_dogs', as_supervised=True)
train_data = data['train']
train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
return train_data
###Output
_____no_output_____
###Markdown
To train, call Estimator's train function:
###Code
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=500)
###Output
_____no_output_____
###Markdown
Similarly, to evaluate, call the Estimator's evaluate function:
###Code
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Estimators View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This document introduces `tf.estimator`—a high-level TensorFlowAPI. Estimators encapsulate the following actions:* training* evaluation* prediction* export for servingYou may either use the pre-made Estimators we provide or write yourown custom Estimators. All Estimators—whether pre-made or custom—areclasses based on the `tf.estimator.Estimator` class.For a quick example try [Estimator tutorials](../../tutorials/estimators/linear.ipynb). For an overview of the API design, see the [white paper](https://arxiv.org/abs/1708.02637). AdvantagesSimilar to a `tf.keras.Model`, an `estimator` is a model-level abstraction. The `tf.estimator` provides some capabilities currently still under development for `tf.keras`. These are: * Parameter server based training * Full [TFX](http://tensorflow.org/tfx) integration. Estimators CapabilitiesEstimators provide the following benefits:* You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.* Estimators provide a safe distributed training loop that controls how and when to: * load data * handle exceptions * create checkpoint files and recover from failures * save summaries for TensorBoardWhen writing an application with Estimators, you must separate the data inputpipeline from the model. This separation simplifies experiments withdifferent data sets. Pre-made EstimatorsPre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. `tf.estimator.DNNClassifier`, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks. Structure of a pre-made Estimators programA TensorFlow program relying on a pre-made Estimator typically consists of the following four steps: 1. Write one or more dataset importing functions.For example, you might create one function to import the training set and another function to import the test set. Each dataset importing function must return two objects:* a dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data* a Tensor containing one or more labelsFor example, the following code illustrates the basic skeleton for an input function:```def input_fn(dataset): ... manipulate dataset, extracting the feature dict and the label return feature_dict, label```See [data guide](../../guide/data.md) for details. 2. Define the feature columns.Each `tf.feature_column` identifies a feature name, its type, and any input pre-processing. For example, the following snippet creates three feature columns that hold integer or floating-point data. The first two feature columns simply identify the feature's name and type. 
The third feature column also specifies a lambda the program will invoke to scale the raw data:``` Define three numeric feature columns.population = tf.feature_column.numeric_column('population')crime_rate = tf.feature_column.numeric_column('crime_rate')median_education = tf.feature_column.numeric_column( 'median_education', normalizer_fn=lambda x: x - global_education_mean)```For further information, see the [feature columns tutorial](https://www.tensorflow.org/tutorials/keras/feature_columns). 3. Instantiate the relevant pre-made Estimator.For example, here's a sample instantiation of a pre-made Estimator named `LinearClassifier`:``` Instantiate an estimator, passing the feature columns.estimator = tf.estimator.LinearClassifier( feature_columns=[population, crime_rate, median_education])```For further information, see the [linear classifier tutorial](https://www.tensorflow.org/tutorials/estimators/linear). 4. Call a training, evaluation, or inference method.For example, all Estimators provide a `train` method, which trains a model.``` `input_fn` is the function created in Step 1estimator.train(input_fn=my_training_set, steps=2000)```You can see an example of this below. Benefits of pre-made EstimatorsPre-made Estimators encode best practices, providing the following benefits:* Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster.* Best practices for event (summary) writing and universally useful summaries.If you don't use pre-made Estimators, you must implement the preceding features yourself. Custom EstimatorsThe heart of every Estimator—whether pre-made or custom—is its *model function*, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself. Recommended workflow1. Assuming a suitable pre-made Estimator exists, use it to build your first model and use its results to establish a baseline.2. Build and test your overall pipeline, including the integrity and reliability of your data with this pre-made Estimator.3. If suitable alternative pre-made Estimators are available, run experiments to determine which pre-made Estimator produces the best results.4. Possibly, further improve your model by building your own custom Estimator.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Create an Estimator from a Keras modelYou can convert existing Keras models to Estimators with `tf.keras.estimator.model_to_estimator`. Doing so enables your Kerasmodel to access Estimator's strengths, such as distributed training.Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
###Code
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
input_shape=(160, 160, 3), include_top=False)
estimator_model = tf.keras.Sequential([
keras_mobilenet_v2,
tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # sigmoid for a single-unit binary output (softmax over one unit is constant)
])
# Compile the model
estimator_model.compile(
optimizer='adam',
loss='binary_crossentropy',
    metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create an `Estimator` from the compiled Keras model. The initial model state of the Keras model is preserved in the created `Estimator`:
###Code
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
###Output
_____no_output_____
###Markdown
Treat the derived `Estimator` as you would with any other `Estimator`.
###Code
IMG_SIZE = 160 # All images will be resized to 160x160
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
def train_input_fn(batch_size):
data = tfds.load('cats_vs_dogs', as_supervised=True)
train_data = data['train']
train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
return train_data
###Output
_____no_output_____
###Markdown
To train, call Estimator's train function:
###Code
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=500)
###Output
_____no_output_____
###Markdown
Similarly, to evaluate, call the Estimator's evaluate function:
###Code
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Estimators View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This document introduces `tf.estimator`—a high-level TensorFlowAPI that greatly simplifies machine learning programming. Estimators encapsulatethe following actions:* training* evaluation* prediction* export for servingYou may either use the pre-made Estimators we provide or write yourown custom Estimators. All Estimators—whether pre-made or custom—areclasses based on the `tf.estimator.Estimator` class.For a quick example try [Estimator tutorials](../../tutorials/estimators/linear.ipynb). For an overview of the API design, see the [white paper](https://arxiv.org/abs/1708.02637).Note: TensorFlow also includes a deprecated `Estimator` class at`tf.contrib.learn.Estimator`, which you should not use. Estimator advantagesEstimators provide the following benefits:* You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.* Estimators simplify sharing implementations between model developers.* You can develop a state of the art model with high-level intuitive code. In short, it is generally much easier to create models with Estimators than with the low-level TensorFlow APIs.* Estimators are themselves built on `tf.keras.layers`, which simplifies customization.* Estimators build the graph for you.* Estimators provide a safe distributed training loop that controls how and when to: * build the graph * initialize variables * load data * handle exceptions * create checkpoint files and recover from failures * save summaries for TensorBoardWhen writing an application with Estimators, you must separate the data inputpipeline from the model. This separation simplifies experiments withdifferent data sets. Pre-made EstimatorsPre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. `tf.estimator.DNNClassifier`, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks. Structure of a pre-made Estimators programA TensorFlow program relying on a pre-made Estimator typically consists of the following four steps: 1. Write one or more dataset importing functions.For example, you might create one function to import the training set and another function to import the test set. Each dataset importing function must return two objects:* a dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data* a Tensor containing one or more labelsFor example, the following code illustrates the basic skeleton for an input function:```def input_fn(dataset): ... manipulate dataset, extracting the feature dict and the label return feature_dict, label```See [data guide](../../guide/data.md) for details. 2. Define the feature columns.Each `tf.feature_column` identifies a feature name, its type, and any input pre-processing. For example, the following snippet creates three feature columns that hold integer or floating-point data. The first two feature columns simply identify the feature's name and type. 
The third feature column also specifies a lambda the program will invoke to scale the raw data:``` Define three numeric feature columns.population = tf.feature_column.numeric_column('population')crime_rate = tf.feature_column.numeric_column('crime_rate')median_education = tf.feature_column.numeric_column( 'median_education', normalizer_fn=lambda x: x - global_education_mean)```For further information, it is recommended to check this [tutorial](https://www.tensorflow.org/tutorials/keras/feature_columns). 3. Instantiate the relevant pre-made Estimator.For example, here's a sample instantiation of a pre-made Estimator named `LinearClassifier`:``` Instantiate an estimator, passing the feature columns.estimator = tf.estimator.LinearClassifier( feature_columns=[population, crime_rate, median_education])```For further information, it is recommended to check this [tutorial](https://www.tensorflow.org/tutorials/estimators/linear). 4. Call a training, evaluation, or inference method.For example, all Estimators provide a `train` method, which trains a model.``` `input_fn` is the function created in Step 1estimator.train(input_fn=my_training_set, steps=2000)```You can see an example of this below. Benefits of pre-made EstimatorsPre-made Estimators encode best practices, providing the following benefits:* Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster.* Best practices for event (summary) writing and universally useful summaries.If you don't use pre-made Estimators, you must implement the preceding features yourself. Custom EstimatorsThe heart of every Estimator—whether pre-made or custom—is its *model function*, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself. Recommended workflow1. Assuming a suitable pre-made Estimator exists, use it to build your first model and use its results to establish a baseline.2. Build and test your overall pipeline, including the integrity and reliability of your data with this pre-made Estimator.3. If suitable alternative pre-made Estimators are available, run experiments to determine which pre-made Estimator produces the best results.4. Possibly, further improve your model by building your own custom Estimator.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Create an Estimator from a Keras modelYou can convert existing Keras models to Estimators with `tf.keras.estimator.model_to_estimator`. Doing so enables your Kerasmodel to access Estimator's strengths, such as distributed training.Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
###Code
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
input_shape=(160, 160, 3), include_top=False)
estimator_model = tf.keras.Sequential([
keras_mobilenet_v2,
tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # sigmoid for a single-unit binary output (softmax over one unit is constant)
])
# Compile the model
estimator_model.compile(
optimizer='adam',
loss='binary_crossentropy',
    metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create an `Estimator` from the compiled Keras model. The initial model state of the Keras model is preserved in the created `Estimator`:
###Code
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
###Output
_____no_output_____
###Markdown
Treat the derived `Estimator` as you would with any other `Estimator`.
###Code
IMG_SIZE = 160 # All images will be resized to 160x160
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
def train_input_fn(batch_size):
data = tfds.load('cats_vs_dogs', as_supervised=True)
train_data = data['train']
train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
return train_data
###Output
_____no_output_____
###Markdown
To train, call Estimator's train function:
###Code
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=500)
###Output
_____no_output_____
###Markdown
Similarly, to evaluate, call the Estimator's evaluate function:
###Code
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
###Output
_____no_output_____ |
examples/nlp/token_classification/PunctuationWithBERT.ipynb | ###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng); to use the complete dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/examples/nlp/token_classification/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/token_classification/get_tatoeba_data.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python get_tatoeba_data.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:- labels_train.txt- labels_dev.txt- text_train.txt- text_dev.txt The format of the data is described in the NeMo docs. Define Neural Modules
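Before defining the neural modules below: since the file format is only described in the NeMo docs, a quick way to sanity-check the generated files is to print their first few lines. This is a convenience sketch, not part of the original notebook; it assumes `DATA_DIR` is already defined above.

```python
# Sketch: peek at the first lines of the generated text and label files.
import os

for name in ['text_train.txt', 'labels_train.txt', 'text_dev.txt', 'labels_dev.txt']:
    path = os.path.join(DATA_DIR, name)
    print(f'--- {name} ---')
    with open(path) as f:
        for _ in range(3):
            print(f.readline().rstrip())
```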
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of MegatronBERT/BERT/ALBERT/RoBERTa model names, call nemo_nlp.nm.trainables.get_pretrained_lm_models_list()
bert_model = nemo_nlp.nm.trainables.get_pretrained_lm_model(
pretrained_model_name=PRETRAINED_BERT_MODEL)
tokenizer = nemo.collections.nlp.data.tokenizers.get_tokenizer(
tokenizer_name="nemobert",
pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
hidden_size = bert_model.hidden_size
# Define classifier for Punctuation and Capitalization tasks
classifier = PunctCapitTokenClassifier(
hidden_size=hidden_size,
punct_num_classes=len(punct_label_ids),
capit_num_classes=len(capit_label_ids),
dropout=0.1,
punct_num_layers=3,
capit_num_layers=2,
)
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_ndim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_ndim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits, capit_logits = classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - the mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits, eval_capit_logits = classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: logging.info("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation task:

              precision    recall  f1-score   support
           O       1.00      0.99      0.99    137268
           ,       0.58      0.95      0.72      2347
           .       0.99      1.00      1.00     19078
           ?       0.98      0.99      0.99      1151
    accuracy                           0.99    159844
   macro avg       0.89      0.98      0.92    159844
weighted avg       0.99      0.99      0.99    159844

Capitalization task:

              precision    recall  f1-score   support
           O       1.00      1.00      1.00    136244
           U       1.00      0.99      0.99     23600
    accuracy                           1.00    159844
   macro avg       1.00      1.00      1.00    159844
weighted avg       1.00      1.00      1.00    159844

Inference
###Code
# Define the list of queiries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits, capit_logits = classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____
###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng); to use the complete dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/nemo/collections/nlp/data/scripts/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/scripts/get_tatoeba.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python ../scripts/get_tatoeba.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:- labels_train.txt- labels_dev.txt- text_train.txt- text_dev.txt The format of the data is described in the NeMo docs. Define Neural Modules
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of BERT model names, check out nemo_nlp.huggingface.BERT.list_pretrained_models()
tokenizer = NemoBertTokenizer(pretrained_model=PRETRAINED_BERT_MODEL)
bert_model = nemo_nlp.nm.trainables.huggingface.BERT(pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
# Define classifier for Punctuation and Capitalization tasks
punct_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(punct_label_ids),
dropout=CLASSIFICATION_DROPOUT,
num_layers=PUNCT_NUM_FC_LAYERS,
name='Punctuation')
capit_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(capit_label_ids),
dropout=CLASSIFICATION_DROPOUT,
name='Capitalization')
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_dim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_dim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - the mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits = punct_classifier(hidden_states=hidden_states)
eval_capit_logits = capit_classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: print("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation task:

              precision    recall  f1-score   support
           O       1.00      0.99      0.99    137268
           ,       0.58      0.95      0.72      2347
           .       0.99      1.00      1.00     19078
           ?       0.98      0.99      0.99      1151
    accuracy                           0.99    159844
   macro avg       0.89      0.98      0.92    159844
weighted avg       0.99      0.99      0.99    159844

Capitalization task:

              precision    recall  f1-score   support
           O       1.00      1.00      1.00    136244
           U       1.00      0.99      0.99     23600
    accuracy                           1.00    159844
   macro avg       1.00      1.00      1.00    159844
weighted avg       1.00      1.00      1.00    159844

Inference
###Code
# Define the list of queiries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____
###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng); to use the complete dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/examples/nlp/token_classification/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/token_classification/get_tatoeba_data.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python get_tatoeba_data.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:- labels_train.txt- labels_dev.txt- text_train.txt- text_dev.txt The format of the data is described in the NeMo docs. Define Neural Modules
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of BERT/ALBERT/RoBERTa model names, call nemo_nlp.nm.trainables.get_bert_models_list()
tokenizer = NemoBertTokenizer(pretrained_model=PRETRAINED_BERT_MODEL)
bert_model = nemo_nlp.nm.trainables.get_huggingface_model(pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
# Define classifier for Punctuation and Capitalization tasks
punct_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(punct_label_ids),
dropout=CLASSIFICATION_DROPOUT,
num_layers=PUNCT_NUM_FC_LAYERS,
name='Punctuation')
capit_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(capit_label_ids),
dropout=CLASSIFICATION_DROPOUT,
name='Capitalization')
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_ndim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_ndim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
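# (Added note, not in the original.) For the Tatoeba setup in this notebook, punct_label_ids is
# expected to map the punctuation labels O , . ? to integer ids (e.g. roughly {'O': 0, ',': 1, '.': 2, '?': 3})
# and capit_label_ids the labels O and U, matching the label sets shown in the evaluation report later on.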
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits = punct_classifier(hidden_states=hidden_states)
eval_capit_logits = capit_classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: logging.info("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation:
               precision  recall  f1-score  support
O                   1.00    0.99      0.99   137268
,                   0.58    0.95      0.72     2347
.                   0.99    1.00      1.00    19078
?                   0.98    0.99      0.99     1151
accuracy                              0.99   159844
macro avg           0.89    0.98      0.92   159844
weighted avg        0.99    0.99      0.99   159844

Capitalization:
               precision  recall  f1-score  support
O                   1.00    1.00      1.00   136244
U                   1.00    0.99      0.99    23600
accuracy                              1.00   159844
macro avg           1.00    1.00      1.00   159844
weighted avg        1.00    1.00      1.00   159844

Inference
###Code
# Define the list of queries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____
###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng). To use the full dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/examples/nlp/token_classification/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/token_classification/get_tatoeba_data.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python get_tatoeba_data.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:
- labels_train.txt
- labels_dev.txt
- text_train.txt
- text_dev.txt
The format of the data is described in the NeMo docs.
Define Neural Modules
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of BERT model names, check out nemo_nlp.huggingface.BERT.list_pretrained_models()
tokenizer = NemoBertTokenizer(pretrained_model=PRETRAINED_BERT_MODEL)
bert_model = nemo_nlp.nm.trainables.huggingface.BERT(pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
# Define classifier for Punctuation and Capitalization tasks
punct_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(punct_label_ids),
dropout=CLASSIFICATION_DROPOUT,
num_layers=PUNCT_NUM_FC_LAYERS,
name='Punctuation')
capit_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(capit_label_ids),
dropout=CLASSIFICATION_DROPOUT,
name='Capitalization')
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_dim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_dim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits = punct_classifier(hidden_states=hidden_states)
eval_capit_logits = capit_classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: logging.info("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation:
               precision  recall  f1-score  support
O                   1.00    0.99      0.99   137268
,                   0.58    0.95      0.72     2347
.                   0.99    1.00      1.00    19078
?                   0.98    0.99      0.99     1151
accuracy                              0.99   159844
macro avg           0.89    0.98      0.92   159844
weighted avg        0.99    0.99      0.99   159844

Capitalization:
               precision  recall  f1-score  support
O                   1.00    1.00      1.00   136244
U                   1.00    0.99      0.99    23600
accuracy                              1.00   159844
macro avg           1.00    1.00      1.00   159844
weighted avg        1.00    1.00      1.00   159844

Inference
###Code
# Define the list of queries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____
###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng). To use the full dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/examples/nlp/token_classification/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/token_classification/get_tatoeba_data.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python get_tatoeba_data.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:
- labels_train.txt
- labels_dev.txt
- text_train.txt
- text_dev.txt
The format of the data is described in the NeMo docs.
Define Neural Modules
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of BERT model names, check out nemo_nlp.huggingface.BERT.list_pretrained_models()
tokenizer = NemoBertTokenizer(pretrained_model=PRETRAINED_BERT_MODEL)
bert_model = nemo_nlp.nm.trainables.huggingface.BERT(pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
# Define classifier for Punctuation and Capitalization tasks
punct_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(punct_label_ids),
dropout=CLASSIFICATION_DROPOUT,
num_layers=PUNCT_NUM_FC_LAYERS,
name='Punctuation')
capit_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(capit_label_ids),
dropout=CLASSIFICATION_DROPOUT,
name='Capitalization')
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_dim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_dim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits = punct_classifier(hidden_states=hidden_states)
eval_capit_logits = capit_classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: print("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation:
               precision  recall  f1-score  support
O                   1.00    0.99      0.99   137268
,                   0.58    0.95      0.72     2347
.                   0.99    1.00      1.00    19078
?                   0.98    0.99      0.99     1151
accuracy                              0.99   159844
macro avg           0.89    0.98      0.92   159844
weighted avg        0.99    0.99      0.99   159844

Capitalization:
               precision  recall  f1-score  support
O                   1.00    1.00      1.00   136244
U                   1.00    0.99      0.99    23600
accuracy                              1.00   159844
macro avg           1.00    1.00      1.00   159844
weighted avg        1.00    1.00      1.00   159844

Inference
###Code
# Define the list of queries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____
###Markdown
Download and preprocess the data In this notebook we're going to use a subset of English examples from the [Tatoeba collection of sentences](https://tatoeba.org/eng). To use the full dataset, set NUM_SAMPLES=-1, and consider including other datasets to improve the performance of the model. Use [NeMo/examples/nlp/token_classification/get_tatoeba_data.py](https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/token_classification/get_tatoeba_data.py) to download and preprocess the Tatoeba data.
###Code
# This should take about a minute since the data is already downloaded in the previous step
! python get_tatoeba_data.py --data_dir $DATA_DIR --num_sample $NUM_SAMPLES
###Output
_____no_output_____
###Markdown
After the previous step, you should have a `DATA_DIR` folder with the following files:
- labels_train.txt
- labels_dev.txt
- text_train.txt
- text_dev.txt
The format of the data is described in the NeMo docs.
Define Neural Modules
###Code
# Instantiate neural factory with supported backend
nf = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/token_classification.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level="O1",
# Define path to the directory you want to store your results
log_dir=WORK_DIR,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of MegatronBERT/BERT/ALBERT/RoBERTa model names, call nemo_nlp.nm.trainables.get_pretrained_lm_models_list()
bert_model = nemo_nlp.nm.trainables.get_pretrained_lm_model(
pretrained_model_name=PRETRAINED_BERT_MODEL)
tokenizer = nemo.collections.nlp.data.tokenizers.get_tokenizer(
tokenizer_name="nemobert",
pretrained_model_name=PRETRAINED_BERT_MODEL)
###Output
_____no_output_____
###Markdown
Describe training DAG
###Code
train_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_train.txt'),
label_file=os.path.join(DATA_DIR, 'labels_train.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE)
punct_label_ids = train_data_layer.dataset.punct_label_ids
capit_label_ids = train_data_layer.dataset.capit_label_ids
# Define classifier for Punctuation and Capitalization tasks
punct_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(punct_label_ids),
dropout=CLASSIFICATION_DROPOUT,
num_layers=PUNCT_NUM_FC_LAYERS,
name='Punctuation')
capit_classifier = TokenClassifier(
hidden_size=bert_model.hidden_size,
num_classes=len(capit_label_ids),
dropout=CLASSIFICATION_DROPOUT,
name='Capitalization')
# If you don't want to use weighted loss for Punctuation task, use class_weights=None
punct_label_freqs = train_data_layer.dataset.punct_label_frequencies
class_weights = calc_class_weights(punct_label_freqs)
# define loss
punct_loss = CrossEntropyLossNM(logits_ndim=3, weight=class_weights)
capit_loss = CrossEntropyLossNM(logits_ndim=3)
task_loss = LossAggregatorNM(num_inputs=2)
input_ids, input_type_ids, input_mask, loss_mask, subtokens_mask, punct_labels, capit_labels = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
punct_loss = punct_loss(
logits=punct_logits,
labels=punct_labels,
loss_mask=loss_mask)
capit_loss = capit_loss(
logits=capit_logits,
labels=capit_labels,
loss_mask=loss_mask)
task_loss = task_loss(
loss_1=punct_loss,
loss_2=capit_loss)
###Output
_____no_output_____
###Markdown
Describe evaluation DAG
###Code
# Note that you need to specify punct_label_ids and capit_label_ids - mapping from labels to label_ids generated
# during creation of the train_data_layer to make sure that the mapping is correct in case some of the labels from
# the train set are missing in the dev set.
eval_data_layer = nemo_nlp.nm.data_layers.PunctuationCapitalizationDataLayer(
tokenizer=tokenizer,
text_file=os.path.join(DATA_DIR, 'text_dev.txt'),
label_file=os.path.join(DATA_DIR, 'labels_dev.txt'),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
punct_label_ids=punct_label_ids,
capit_label_ids=capit_label_ids)
eval_input_ids, eval_input_type_ids, eval_input_mask, _, eval_subtokens_mask, eval_punct_labels, eval_capit_labels\
= eval_data_layer()
hidden_states = bert_model(
input_ids=eval_input_ids,
token_type_ids=eval_input_type_ids,
attention_mask=eval_input_mask)
eval_punct_logits = punct_classifier(hidden_states=hidden_states)
eval_capit_logits = capit_classifier(hidden_states=hidden_states)
###Output
_____no_output_____
###Markdown
Create callbacks
###Code
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[task_loss, punct_loss, capit_loss, punct_logits, capit_logits],
print_func=lambda x: logging.info("Loss: {:.3f}".format(x[0].item())),
step_freq=STEP_FREQ)
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
print ('Number of steps per epoch: ', steps_per_epoch)
# Callback to evaluate the model
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_punct_logits,
eval_capit_logits,
eval_punct_labels,
eval_capit_labels,
eval_subtokens_mask],
user_iter_callback=lambda x, y: eval_iter_callback(x, y),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(x,
punct_label_ids,
capit_label_ids),
eval_step=steps_per_epoch)
# Callback to store checkpoints
ckpt_callback = nemo.core.CheckpointCallback(
folder=nf.checkpoint_dir,
step_freq=STEP_FREQ)
###Output
_____no_output_____
###Markdown
Training
###Code
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
nf.train(tensors_to_optimize=[task_loss],
callbacks=[callback_train, callback_eval, ckpt_callback],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE})
###Output
_____no_output_____
###Markdown
10 epochs of training on the subset of data should take about 20 minutes on a single V100 GPU. The model performance should be similar to the following:

Punctuation:
               precision  recall  f1-score  support
O                   1.00    0.99      0.99   137268
,                   0.58    0.95      0.72     2347
.                   0.99    1.00      1.00    19078
?                   0.98    0.99      0.99     1151
accuracy                              0.99   159844
macro avg           0.89    0.98      0.92   159844
weighted avg        0.99    0.99      0.99   159844

Capitalization:
               precision  recall  f1-score  support
O                   1.00    1.00      1.00   136244
U                   1.00    0.99      0.99    23600
accuracy                              1.00   159844
macro avg           1.00    1.00      1.00   159844
weighted avg        1.00    1.00      1.00   159844

Inference
###Code
# Define the list of queries for inference
queries = ['can i help you',
'yes please',
'we bought four shirts from the nvidia gear store in santa clara',
'we bought four shirts one mug and ten thousand titan rtx graphics cards',
'the more you buy the more you save']
infer_data_layer = nemo_nlp.nm.data_layers.BertTokenClassificationInferDataLayer(
queries=queries,
tokenizer=tokenizer,
max_seq_length=MAX_SEQ_LENGTH,
batch_size=1)
input_ids, input_type_ids, input_mask, _, subtokens_mask = infer_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
punct_logits = punct_classifier(hidden_states=hidden_states)
capit_logits = capit_classifier(hidden_states=hidden_states)
evaluated_tensors = nf.infer(tensors=[punct_logits, capit_logits, subtokens_mask],
checkpoint_dir=WORK_DIR + '/checkpoints')
# helper functions
def concatenate(lists):
return np.concatenate([t.cpu() for t in lists])
punct_ids_to_labels = {punct_label_ids[k]: k for k in punct_label_ids}
capit_ids_to_labels = {capit_label_ids[k]: k for k in capit_label_ids}
punct_logits, capit_logits, subtokens_mask = [concatenate(tensors) for tensors in evaluated_tensors]
punct_preds = np.argmax(punct_logits, axis=2)
capit_preds = np.argmax(capit_logits, axis=2)
for i, query in enumerate(queries):
print(f'Query: {query}')
punct_pred = punct_preds[i][subtokens_mask[i] > 0.5]
capit_pred = capit_preds[i][subtokens_mask[i] > 0.5]
words = query.strip().split()
if len(punct_pred) != len(words) or len(capit_pred) != len(words):
raise ValueError('Pred and words must be of the same length')
output = ''
for j, w in enumerate(words):
punct_label = punct_ids_to_labels[punct_pred[j]]
capit_label = capit_ids_to_labels[capit_pred[j]]
if capit_label != 'O':
w = w.capitalize()
output += w
if punct_label != 'O':
output += punct_label
output += ' '
print(f'Combined: {output.strip()}\n')
###Output
_____no_output_____ |
Trees Solved/Tree Representation Implementation (Nodes and References).ipynb | ###Markdown
Nodes and References Implementation of a TreeIn this notebook is the code corresponding to the lecture for implementing the representation of a Tree as a class with nodes and references!
###Code
class BinaryTree(object):
def __init__(self, obj):
self.key = obj
self.left_node = None
self.right_node = None
def insertLeft(self, newObj):
t = BinaryTree(newObj)
if not self.left_node:
self.left_node = t
else:
t.left_node = self.left_node
self.left_node = t
def insertRight(self, newObj):
t = BinaryTree(newObj)
if not self.right_node:
self.right_node = t
else:
t.right_node = self.right_node
self.right_node = t
def getRightChild(self):
return self.right_node
def getLeftChild(self):
return self.left_node
def setRootVal(self,obj):
self.key = obj
def getRootVal(self):
return self.key
###Output
_____no_output_____
###Markdown
We can see some examples of creating a tree and assigning children. Note that some outputs are Trees themselves!
###Code
from __future__ import print_function
r = BinaryTree('a')
print(r.getRootVal())
print(r.getLeftChild())
r.insertLeft('b')
print(r.getLeftChild())
print(r.getLeftChild().getRootVal())
r.insertRight('c')
print(r.getRightChild())
print(r.getRightChild().getRootVal())
r.getRightChild().setRootVal('hello')
print(r.getRightChild().getRootVal())
###Output
a
None
<__main__.BinaryTree object at 0x10d3feb50>
b
<__main__.BinaryTree object at 0x10d3fe220>
c
hello
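###Markdown
(Added sketch, not part of the original notebook.) A minimal preorder traversal built on the BinaryTree class above, to make the node-and-reference structure easier to see; it assumes the tree `r` constructed in the previous cell.
###Code
def preorder(tree):
    # visit the root first, then recurse into the left and right subtrees
    if tree:
        print(tree.getRootVal())
        preorder(tree.getLeftChild())
        preorder(tree.getRightChild())
preorder(r)  # with the tree built above this prints: a, b, hello
###Output
_____no_output_____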
|
05_Merge/Auto_MPG/Exercises.ipynb | ###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.head()
cars1.dropna(how="all", axis="columns", inplace=True)
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
display(cars1.shape)
cars2.shape
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
cars["owners"] = np.random.randint(15000, 73000, size=len(cars))
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,:'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.info()
cars2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
mpg 200 non-null float64
cylinders 200 non-null int64
displacement 200 non-null int64
horsepower 200 non-null object
weight 200 non-null int64
acceleration 200 non-null float64
model 200 non-null int64
origin 200 non-null int64
car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.1+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 398 entries, 0 to 199
Data columns (total 9 columns):
mpg 398 non-null float64
cylinders 398 non-null int64
displacement 398 non-null int64
horsepower 398 non-null object
weight 398 non-null int64
acceleration 398 non-null float64
model 398 non-null int64
origin 398 non-null int64
car 398 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 31.1+ KB
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1[["mpg", "cylinders", "displacement", 'horsepower', "weight", "acceleration", "model", "origin", "car"]]
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(f"Number of observations in cars1 is {cars1.shape[0]}")
print(f"Number of observations in cars2 is {cars2.shape[0]}")
###Output
Number of observations in cars1 is 198
Number of observations in cars2 is 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = pd.Series(np.random.randint(low= 15000, high=73001, size=(len(cars))))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars["owner"] = owners
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1
cars1 = cars1.loc[:,'mpg':'car']
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
cars1.head()
cars2.shape[0]
cars2.head()
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars =cars1.append(cars2)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000,73000, size = 398, dtype='I')
len(owners)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners']=owners
cars
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 398 entries, 0 to 199
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 398 non-null float64
1 cylinders 398 non-null int64
2 displacement 398 non-null int64
3 horsepower 398 non-null object
4 weight 398 non-null int64
5 acceleration 398 non-null float64
6 model 398 non-null int64
7 origin 398 non-null int64
8 car 398 non-null object
9 owners 398 non-null uint32
dtypes: float64(2), int64(5), object(2), uint32(1)
memory usage: 32.6+ KB
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.dropna(axis=1)
cars2
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.info()
cars2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
mpg 200 non-null float64
cylinders 200 non-null int64
displacement 200 non-null int64
horsepower 200 non-null object
weight 200 non-null int64
acceleration 200 non-null float64
model 200 non-null int64
origin 200 non-null int64
car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.1+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.info()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,:'car']
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
pd.concat([cars1, cars2])
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, 73000, len(cars))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars1.head()
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, high=73001, size=398, dtype='l')
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
cars1.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.columns
cars1.drop(cars1.columns[9:14], axis = 1, inplace = True) #axis = 1 to delete columnwise, 0 to delete row-wise
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2) #append is useful to add rows: cars1 rows first cars2 rows after
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000,73001, size= 398, dtype = 'l') #creates an array of size 398, low = 15000,
#high = 73001 =>73000, datatype = long
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0], cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(low = 15000, high = 73000 + 1, size = cars.shape[0])
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('./cars1.csv')
cars2 = pd.read_csv('./cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(f"Dataset1: {len(cars1)}", f"Dataset2: {len(cars2)}", sep='\n')
###Output
Dataset1: 198
Dataset2: 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
len(cars)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, 73000, len(cars))
len(owners)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.columns
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
link1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
link2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(link1)
cars2 = pd.read_csv(link2)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, :'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
print(cars.shape)
###Output
(398, 9)
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars['owners'] = np.random.randint(15000, high=73001, size=398, dtype='l')
cars['owners']
np.random.randint?
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
cars1 = pd.read_csv(url1)
cars1
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars2 = pd.read_csv(url2)
cars2
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape
# shows there are 198 rows by 9 columns
cars2.shape
# shows there are 200 rows by 9 columns
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
# The append() method appends the rows of the second DataFrame after those of the first.
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
#random.randint(low, high=None, size=None, dtype=int)
#Returns random integers from "low" (inclusive) up to "high" (exclusive).
nr_owners = np.random.randint(low=15000, high=73001, size=398, dtype='l')
nr_owners
###Output
_____no_output_____
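###Markdown
Since `high` is exclusive in `np.random.randint`, passing `high=73001` is what makes 73,000 itself a possible value; solutions that pass `high=73000` top out at 72,999. A quick illustrative check, not part of the exercise:
###Code
# purely illustrative sanity check of the inclusive/exclusive bounds
sample = np.random.randint(15000, 73001, size=100_000)
sample.min() >= 15000, sample.max() <= 73000
###Output
_____no_output_____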
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners']= nr_owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("cars1.csv")
cars2 = pd.read_csv("cars2.csv")
cars1.info()
cars2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 200 non-null float64
1 cylinders 200 non-null int64
2 displacement 200 non-null int64
3 horsepower 200 non-null object
4 weight 200 non-null int64
5 acceleration 200 non-null float64
6 model 200 non-null int64
7 origin 200 non-null int64
8 car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.2+ KB
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.dropna(axis=1)
###Output
_____no_output_____
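###Markdown
Note that `dropna(axis=1)` removes every column containing at least one NaN, which works here only because the trailing unnamed columns are completely empty. A minimal alternative sketch, assuming pandas labels those blank headers "Unnamed: ...", filters on the column names instead:
###Code
# hedged alternative: drop only the auto-generated "Unnamed: ..." columns
cars1 = pd.read_csv("cars1.csv")
cars1 = cars1.loc[:, ~cars1.columns.str.startswith("Unnamed")]
cars1.head()
###Output
_____no_output_____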
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.info()
cars2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 198 entries, 0 to 197
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 198 non-null float64
1 cylinders 198 non-null int64
2 displacement 198 non-null int64
3 horsepower 198 non-null object
4 weight 198 non-null int64
5 acceleration 198 non-null float64
6 model 198 non-null int64
7 origin 198 non-null int64
8 car 198 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.0+ KB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 200 non-null float64
1 cylinders 200 non-null int64
2 displacement 200 non-null int64
3 horsepower 200 non-null object
4 weight 200 non-null int64
5 acceleration 200 non-null float64
6 model 200 non-null int64
7 origin 200 non-null int64
8 car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.2+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
pd.concat?    # inspect the signature and docstring of pd.concat
cars = pd.concat([cars1, cars2])
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 398 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 398 non-null float64
1 cylinders 398 non-null int64
2 displacement 398 non-null int64
3 horsepower 398 non-null object
4 weight 398 non-null int64
5 acceleration 398 non-null float64
6 model 398 non-null int64
7 origin 398 non-null int64
8 car 398 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 31.1+ KB
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars.index.size
owners = np.random.randint(low=15000, high=73001, size=cars.index.size, dtype='l')
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars["owners"] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
print(cars1.head())
print(cars2.head())
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
3 18.5 6 250 110 3645 16.2 76
4 17.5 6 258 95 3193 17.8 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
3 1 pontiac ventura sj
4 1 amc pacer d/l
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[: , "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print("cars1 number of rows: ", str(cars1.shape[0]))
print("cars2 number of rows: ", str(cars2.shape[0]))
###Output
cars1 number of rows: 198
cars2 number of rows: 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
print("cars number of rows: ", str(cars.shape[0]))
###Output
cars number of rows: 398
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
print(nr_owners)
nr_owners = np.random.randint(15000, #lowest number
high = 73001, #highest number (plus 1)
size = 398, #how many rows we need it for
dtype = 'l')
###Output
[67311 40013 55475 23398 23268 20709 29520 38198 39726 30130 40696 71113
53620 42413 34265 16341 69799 67507 42633 63457 34692 69447 39801 23940
59224 35894 35815 57575 33116 17849 33333 50637 31758 62861 21443 57695
62836 34820 23567 35036 68626 61110 42098 56319 59645 45777 26033 31170
47161 56410 62463 52089 61636 50213 20424 68269 45932 19561 20129 60319
60697 24583 62423 29225 46068 39418 26571 43280 57400 16375 34826 29783
47422 48955 41005 39780 65863 46494 59815 45298 62039 34767 47763 51639
39525 52782 48338 40213 30821 20446 41104 18606 27185 17951 21689 21837
27373 58957 68354 62834 27108 64266 35062 47932 42410 42199 20701 37671
59476 59661 24566 71958 70855 52205 50661 52385 51422 71424 17355 16123
57967 56288 46332 20409 24775 49850 43683 72144 29119 57190 53302 70188
49491 69521 39611 68610 51545 18097 32084 24602 41864 48253 17763 51905
70814 19883 54716 34594 48302 66786 66003 43886 64363 64708 53520 44260
17792 16097 48927 39517 28258 63333 39662 48159 49474 51634 43160 17467
62327 41190 70988 34884 53710 30171 64771 54168 25191 48680 50254 36613
55299 53500 23851 34000 15891 23742 61946 32657 39482 54568 52262 70782
70055 33323 50761 25458 48139 19298 45188 54861 24586 42361 37909 29105
41052 46123 70244 34539 65864 63683 64334 34696 35146 21819 63322 23863
43649 52518 34597 35718 49877 35348 20506 58098 59108 32605 19060 17761
69650 31509 20800 16647 36897 27712 49919 24285 63888 23617 66269 54340
26489 20862 48832 54184 16575 34379 31519 62990 53049 35217 23897 59835
35562 20257 46083 61204 31231 57324 61356 55575 50022 68185 52334 65227
43164 31623 24891 48449 46720 16285 25486 62110 15822 23662 21325 33272
28467 37736 47202 32997 57702 31587 35759 25729 60129 35705 23968 65481
55483 30403 55213 38349 61765 61356 34588 25752 46457 35352 15088 59989
72856 33317 67275 48471 17093 23578 30287 43462 18463 69146 68900 67462
26569 54675 20469 62672 35793 58552 57591 59053 29475 58989 48934 51564
45342 18773 52738 70131 68653 51646 33392 47832 48489 15671 47363 55476
34926 51549 57567 44381 69446 69697 66623 43478 44600 37899 46456 36675
48666 58048 35229 28050 66271 64310 45527 18599 19033 58292 19894 44718
44999 31434 44744 31992 40345 63139 53573 32227 37804 47475 38969 67392
62754 68179 60510 38926 42519 21843 25230 32603 42207 19213 43493 58261
48998 67402 45085 45557 40960 65996 51860 32310 29832 28981 54098 68855
27486 47630]
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
df= pd.read_csv(url1, sep = ',')
df2= pd.read_csv(url2, sep = ',')
df.head(3)
df2.head(3)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
df = df.iloc[:, 0:9]
df.head(3)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
df.info()
df2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 200 non-null float64
1 cylinders 200 non-null int64
2 displacement 200 non-null int64
3 horsepower 200 non-null object
4 weight 200 non-null int64
5 acceleration 200 non-null float64
6 model 200 non-null int64
7 origin 200 non-null int64
8 car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.2+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
coches = df.append(df2)
coches
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
coches['owners'] = np.random.randint(15000, 73000, coches.shape[0])
coches.head(2)
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import random
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = pd.read_csv('cars1.csv')
_ = cars1.columns.str.contains('Unnamed:')
# Solution 1
cars1 = cars1.loc[:,~_] # 741 µs ± 79.1 µs
cars1 = pd.read_csv('cars1.csv')
_ = cars1.columns.str.contains('Unnamed:')
# Solution 2
cars1 = cars1.drop(columns=cars1.columns[_]) # 995 µs ± 17.1 µs (drop returns a copy, so assign it back)
cars1 = pd.read_csv('cars1.csv')
_ = cars1.columns.str.contains('Unnamed:')
# Solution 3
for c in cars1.columns[_]: del cars1[c] # 1.560 µs
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape, cars2.shape
cars1.head(2)
cars2.head(2)
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
# Solution 1
cars = pd.concat([cars1, cars2])
# Solution 2
cars = cars1.append(cars2)
cars.reset_index(inplace=True)
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
ser = pd.Series(np.random.randint(15000, 73000, cars.shape[0]), index=cars.index)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = ser
cars.head(2)
###Output
_____no_output_____
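###Markdown
Building the Series with `index=cars.index` above is deliberate: assigning a Series to a column aligns on index labels, while a bare NumPy array is assigned purely by position. A minimal sketch of the difference, reusing the `cars` frame from above (`values` is just an illustrative name):
###Code
values = np.random.randint(15000, 73000, cars.shape[0])
cars['owners'] = values                               # ndarray: filled in by position
cars['owners'] = pd.Series(values, index=cars.index)  # Series: aligned on index labels
cars.head(2)
###Output
_____no_output_____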
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import random
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1=pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2=pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1=cars1.loc[:,'mpg':'car']
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars=cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars['owners']=np.random.randint(15000,73000,size=cars.shape[0])
cars
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
#already completed
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,:"car"]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0], cars2.shape[0])
###Output
198 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
pd.concat([cars1, cars2], axis=0)
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
cars2.shape
###Output
(198, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, 73001, size=398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.drop(cars1.columns[cars1.columns.str.startswith('Unnamed:')], axis = 1)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(len(cars1))
print(len(cars2))
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = pd.Series(np.random.randint(15000, 73000, len(cars)), name = 'owners')
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.drop(cars1.columns[9:], axis=1)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 198 entries, 0 to 197
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 198 non-null float64
1 cylinders 198 non-null int64
2 displacement 198 non-null int64
3 horsepower 198 non-null object
4 weight 198 non-null int64
5 acceleration 198 non-null float64
6 model 198 non-null int64
7 origin 198 non-null int64
8 car 198 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.0+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2])
cars.head()
cars1.append(cars2)
###Output
_____no_output_____
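###Markdown
Note that `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0, so on recent versions only the `pd.concat` form runs; `ignore_index=True` also gives the stacked frame a clean 0..397 index:
###Code
cars = pd.concat([cars1, cars2], ignore_index=True)
cars.shape
###Output
_____no_output_____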
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
cars.shape
owners = np.random.randint(low=15000, high=73000, size=398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
pd.__version__
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.dropna(axis='columns')
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape, cars2.shape)
###Output
(198, 9) (200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2], ignore_index=True)
cars.index
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nrowners = np.random.randint(15000, 73000, cars.shape[0])
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nrowners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2],axis=0)
cars = cars.reset_index()
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000,73000,cars.shape[0])
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners']=owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv').iloc[:,:9]
cars2 = pd.read_csv('cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape,cars2.shape
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2])
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
len(cars)
from numpy import random
owner = random.randint(15000,73000,398)
owner
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owner'] = owner
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1_url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
cars2_url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1, cars2 = pd.read_csv(cars1_url), pd.read_csv(cars2_url)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.dropna(axis=1)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0], cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
#pd.Series(np.repeat(np.random.randint(15_000, 73_000),cars.shape[1]))
rand_len = cars.shape[0]
owners = pd.Series(np.random.randint(15_000,73_000,size=rand_len))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2])
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000,73000,len(cars))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import random
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv).
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
###Output
_____no_output_____
###Markdown
Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
desired_col_names = list(cars2.columns)
cars1 = cars1[desired_col_names]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0], cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.merge(cars1,cars2,how='outer')
# Alternatives:
#cars = cars1.append(cars2)
#cars = pd.concat([cars1, cars2])
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
series = random.sample(range(15000,73000),cars.shape[0])
cars['owners'] = series
###Output
_____no_output_____
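###Markdown
`random.sample` draws without replacement, so every owner count above is unique, whereas `np.random.randint` can repeat values; either satisfies the exercise. A small illustrative sketch of the distinction:
###Code
import random
import numpy as np

unique_draw = random.sample(range(15000, 73000), 5)   # no repeated values possible
repeatable_draw = np.random.randint(15000, 73000, 5)  # repeats are possible
unique_draw, repeatable_draw
###Output
_____no_output_____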
###Markdown
Step 8. Add the column owners to cars
###Code
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url_1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url_2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url_1)
cars2 = pd.read_csv(url_2)
cars1
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
if 'Unnamed: 9' in cars1:
cars1 = cars1.drop(columns=['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13'])
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
display(cars1, cars2)
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars. A join takes a key column from table1 and a key column from table2 and matches the rows on those keys. This is not what we want with our dataframe - ours has the same columns and different rows. We want to use concat, but need to fiddle with the indexes.
###Code
cars = cars2.append(cars1)
cars
cars = pd.concat([cars1, cars2], ignore_index=True)
cars
cars.reset_index()
###Output
_____no_output_____
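###Markdown
To make the index point concrete: `pd.concat` keeps each frame's original labels unless told otherwise, so the stacked frame carries duplicated 0..199 labels, while `ignore_index=True` (or a later `reset_index`) rebuilds a clean 0..397 range. A minimal sketch (`stacked` and `clean` are illustrative names):
###Code
stacked = pd.concat([cars1, cars2])                   # keeps 0..197 then 0..199 (duplicate labels)
clean = pd.concat([cars1, cars2], ignore_index=True)  # fresh 0..397 index
stacked.index.is_unique, clean.index.is_unique
###Output
_____no_output_____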
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, 73000, 398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
print(cars1.head())
print(cars2.head())
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
3 18.5 6 250 110 3645 16.2 76
4 17.5 6 258 95 3193 17.8 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
3 1 pontiac ventura sj
4 1 amc pacer d/l
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv', sep=',')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv', sep=',')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.iloc[:,:9]
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2], keys=['cars1', 'cars2'])
cars
###Output
_____no_output_____
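###Markdown
Passing `keys=` builds a two-level (MultiIndex) row index, so each source frame can still be pulled back out of the combined one. A small illustrative sketch, reusing `cars` from the cell above:
###Code
# select each original block by its outer key
cars.loc['cars1'].shape, cars.loc['cars2'].shape
###Output
_____no_output_____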
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
nr_numbers = np.random.randint(15000, high=73000, size=398, dtype='l')
nr_numbers
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_numbers
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv).
###Code
url1 = r'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = r'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
data1 = pd.read_csv(url1)
data2 = pd.read_csv(url2)
###Output
_____no_output_____
###Markdown
Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.DataFrame(data1)
cars2 = pd.DataFrame(data2)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.info()
cars2.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 200 non-null float64
1 cylinders 200 non-null int64
2 displacement 200 non-null int64
3 horsepower 200 non-null object
4 weight 200 non-null int64
5 acceleration 200 non-null float64
6 model 200 non-null int64
7 origin 200 non-null int64
8 car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.2+ KB
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2], sort=False)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=cars.shape[0], dtype='l')
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('./cars1.csv')
cars2 = pd.read_csv('./cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=cars.shape[0], dtype='l')
nr_owners.dtype
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cols = [9,10,11,12,13]
cars1.drop(cars1.columns[cols],axis=1,inplace=True)
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape
cars2.shape
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2])
cars.head()
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars['owners'] = np.random.randint(15000, 73000, cars.shape[0])
cars.head()
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.drop(columns=['Unnamed: 9', 'Unnamed: 10','Unnamed: 11','Unnamed: 12','Unnamed: 13'])
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
v, w = cars1.shape
v
x, y = cars2.shape
x
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.merge(cars2, how='outer')
cars
###Output
_____no_output_____
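###Markdown
With no `on=` given, the outer merge joins on every shared column, which here amounts to stacking the rows; unlike `pd.concat`, though, a row present in both frames would be matched into a single row rather than kept twice. A hedged comparison sketch (`merged` and `stacked` are illustrative names):
###Code
merged = cars1.merge(cars2, how='outer')               # matches rows on all shared columns
stacked = pd.concat([cars1, cars2], ignore_index=True) # simply stacks the rows
len(merged), len(stacked)                              # may differ if any row appears in both frames
###Output
_____no_output_____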
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
data = np.random.randint(15000, high=73001, size=398, dtype='l')
cars['owners'] = data
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
cars1.head(5)
cars2.head(5)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1.head(5)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
display(cars1.shape)
print(cars2.shape)
cars2.tail(5)
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.head(5)
cars.tail(5)
#cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(low = 15000, high = 73001, size = 398, dtype='l') # dtype='l' asks for a C long; 'i' would ask for a C int
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.tail(5)
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
c1l = cars1.shape[0]
c2l = cars2.shape[0]
print(c1l, c2l)
###Output
198 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars1 = cars1.append(cars2)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = pd.Series(np.random.randint(low=15000, high=73001, size=len(cars1)))
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars1['owners'] = owners
cars1
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
display(cars1.head())
display(cars2.head())
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.dropna(axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars1.info()
cars2.info()
cars = cars1.append(cars2)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
rand_owners = np.random.randint(low=15000, high=73000, size=cars.index.size, dtype='l')
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = rand_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.iloc[:,0:9]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0],cars2.shape[0])
###Output
198 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1,cars2],axis='index')
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars['owners']= np.random.randint(size=len(cars),low=15000,high=73000)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = './cars1.csv'
url2 = './cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
cars
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = pd.Series(data=np.random.randint(low=15000, high=73000, size=398), name='owners')
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv",',')
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv",',')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.head()
cars1 = cars1.loc[:,"mpg":'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
random_number = np.random.randint(15000, high=73001, size=398, dtype='l')
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = random_number
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("cars1.csv")
cars2 = pd.read_csv("cars2.csv")
print(cars1.head())
cars2.head()
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.dropna(axis=1, inplace=True)
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(f"Cars1 has {cars1.shape[0]} obs, cars2 has {cars2.shape[0]} obs")
###Output
Cars1 has 198 obs, cars2 has 200 obs
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.merge(cars1, cars2, how='outer')
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 398 entries, 0 to 397
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 398 non-null float64
1 cylinders 398 non-null int64
2 displacement 398 non-null int64
3 horsepower 398 non-null object
4 weight 398 non-null int64
5 acceleration 398 non-null float64
6 model 398 non-null int64
7 origin 398 non-null int64
8 car 398 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 31.1+ KB
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
low = 15000
high = 73000
n = cars.shape[0]
owners = pd.Series(np.random.randint(low, high=high, size=n))
owners.head()
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("cars1.csv")
cars2 = pd.read_csv("cars2.csv")
cars2
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, ~cars1.columns.str.startswith('Unnamed')]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
cars["ownders"] = np.random.randint(low=15000, high=73000, size=len(cars))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
car1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
car2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
car1.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
car1 = car1.drop(['Unnamed: 9','Unnamed: 10','Unnamed: 11','Unnamed: 12','Unnamed: 13'], axis=1)
car1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(car1.shape[0])
print(car2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.merge(car1,car2,how = 'outer')
cars.shape
car1.merge(car2,how='outer')
cars = car1.append(car2)
cars.shape
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:,'mpg':'car']
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape
cars2.shape
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.shape[0]
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars['owners'] = np.random.randint(15000, 73001, size=398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0]
cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, 73000, size=398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
print(cars1.head())
print(cars2.head())
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
3 18.5 6 250 110 3645 16.2 76
4 17.5 6 258 95 3193 17.8 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
3 1 pontiac ventura sj
4 1 amc pacer d/l
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, high=73001, size=398, dtype='l')
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv', sep=',')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv', sep=',')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(low=15000, high=73000, size=len(cars))
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars["owners"] = owners
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1=pd.read_csv('cars1.csv')
cars2=pd.read_csv('cars2.csv')
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
--------------------------------------------------
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
3 18.5 6 250 110 3645 16.2 76
4 17.5 6 258 95 3193 17.8 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
3 1 pontiac ventura sj
4 1 amc pacer d/l
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.head()
# the last few columns have no values
# slice out just the columns that do have values
cars1=cars1.loc[:,'mpg':'car']
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
# print(cars1.shape[0])
# print(cars2.shape[0])
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars=cars1.append(cars2)
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners=np.random.randint(15000,73001,398)
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
# cars['owners']=pd.Series(owners)
cars['owners']=owners
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
cars1
cars2
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, "mpg":"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape)
print(cars2.shape)
###Output
(198, 9)
(200, 9)
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners_series = np.random.randint(15000, high=73001, size=398, dtype='l')
owners_series
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners_series
cars.tail()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
print(cars1.head())
print(cars2.head())
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
3 16.0 8 304 150 3433 12.0 70
4 17.0 8 302 140 3449 10.5 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
3 1 amc rebel sst NaN NaN NaN
4 1 ford torino NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
3 18.5 6 250 110 3645 16.2 76
4 17.5 6 258 95 3193 17.8 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
3 1 pontiac ventura sj
4 1 amc pacer d/l
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, 'mpg':'car']
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0])
print(cars2.shape[0])
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
nr_owners = np.random.randint(15000, high=73001, size=398, dtype='l')
nr_owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = nr_owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, ~ cars1.columns.str.startswith('Unnamed')]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.shape[0], cars2.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000. Step 8. Add the column owners to cars
###Code
cars['owners'] = np.random.randint(15000, 73000, cars.shape[0])
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv",sep=",")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")
cars1.head(5)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.drop(cars1.columns[[9,10,11,12,13]], axis = 1, inplace = True)
cars1.head(5)
cars2.head(5)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
cars1.head(5)
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.shape
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
cars["owners"] =np.random.randint(15000, 73000, size=len(cars))
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars.head(5)
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv).
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 3. Assign each to a variable called cars1 and cars2
###Code
cars2.head()
cars1.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
try:
cars1 = cars1.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12','Unnamed: 13'], axis = 1)
except:
print('Already cleaned')
cars1.head()
###Output
Already cleaned
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
n1 = cars1.shape[0]
n2 = cars2.shape[0]
print(n1,n2)
###Output
198 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars2.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
rns = np.random.randint(15000, 73000, cars.shape[0])
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = rns
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
#print(cars2.head())
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.dropna(axis=1)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print("cars1: \n",cars1.info())
print("cars2: \n",cars2.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 198 entries, 0 to 197
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 198 non-null float64
1 cylinders 198 non-null int64
2 displacement 198 non-null int64
3 horsepower 198 non-null object
4 weight 198 non-null int64
5 acceleration 198 non-null float64
6 model 198 non-null int64
7 origin 198 non-null int64
8 car 198 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.0+ KB
cars1:
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 200 non-null float64
1 cylinders 200 non-null int64
2 displacement 200 non-null int64
3 horsepower 200 non-null object
4 weight 200 non-null int64
5 acceleration 200 non-null float64
6 model 200 non-null int64
7 origin 200 non-null int64
8 car 200 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 14.2+ KB
cars2:
None
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 398 entries, 0 to 199
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 398 non-null float64
1 cylinders 398 non-null int64
2 displacement 398 non-null int64
3 horsepower 398 non-null object
4 weight 398 non-null int64
5 acceleration 398 non-null float64
6 model 398 non-null int64
7 origin 398 non-null int64
8 car 398 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 31.1+ KB
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, high=73001, size=398, dtype='l') # pd.Series(np.random.randint(low=15000, high=73000, size=(cars.shape[0],)))
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.head()
cars1 = cars1.loc[:, 'mpg' : 'car']
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(cars1.shape[0], cars2.shape[0])
###Output
198 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2], join = 'outer')
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000, high = 73001, size = 398, dtype = 'l')
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
url1 = "https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv"
url2 = "https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv"
cars1 = pd.read_csv(url1)
cars2 = pd.read_csv(url2)
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.drop(["Unnamed: 9","Unnamed: 10","Unnamed: 11", "Unnamed: 12", "Unnamed: 13"], axis =1, inplace = True)
cars1
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print("the length of cars1 is : " + str(len(cars1)))
print("the length of cars2 is : " + str(len(cars2)))
###Output
the length of cars1 is : 198
the length of cars2 is : 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = cars1.append(cars2)
# re-indexing the table after the appending
cars.index = range(len(cars))
cars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 398 entries, 0 to 397
Data columns (total 9 columns):
mpg 398 non-null float64
cylinders 398 non-null int64
displacement 398 non-null int64
horsepower 398 non-null object
weight 398 non-null int64
acceleration 398 non-null float64
model 398 non-null int64
origin 398 non-null int64
car 398 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 28.1+ KB
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = np.random.randint(15000,73001, size = len(cars))
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars["owners"] = owners
cars
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv')
print(cars1.head(3))
print(cars2.head(3))
###Output
mpg cylinders displacement horsepower weight acceleration model \
0 18.0 8 307 130 3504 12.0 70
1 15.0 8 350 165 3693 11.5 70
2 18.0 8 318 150 3436 11.0 70
origin car Unnamed: 9 Unnamed: 10 Unnamed: 11 \
0 1 chevrolet chevelle malibu NaN NaN NaN
1 1 buick skylark 320 NaN NaN NaN
2 1 plymouth satellite NaN NaN NaN
Unnamed: 12 Unnamed: 13
0 NaN NaN
1 NaN NaN
2 NaN NaN
mpg cylinders displacement horsepower weight acceleration model \
0 33.0 4 91 53 1795 17.4 76
1 20.0 6 225 100 3651 17.7 76
2 18.0 6 250 78 3574 21.0 76
origin car
0 3 honda civic
1 1 dodge aspen se
2 1 ford granada ghia
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.iloc[:, :-5]
cars1.head(3)
# Original solution
#
# cars1 = cars1.loc[:, "mpg":"car"]
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(f'cars1 observations: {cars1.shape[0]}')
print(f'cars2 observations: {cars2.shape[0]}')
###Output
cars1 observations: 198
cars2 observations: 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars1.head(3)
cars2.tail(3)
cars = cars1.merge(cars2, how='outer')
cars.shape
# Original solution:
#
# cars = cars1.append(cars2)
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
# owners = pd.Series(range(15000, 73001)) # <== WRONG
# owners.head()
owners = np.random.randint(15000, high=73001, size=cars.shape[0], dtype='l') # FIXED
owners
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars['owners'] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
from numpy import random
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv("./cars1.csv")
cars2 = pd.read_csv("./cars2.csv")
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1 = cars1.loc[:, :"car"]
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print("Cars1 Observations: {}".format(cars1.shape[0]))
print("Cars2 Observations: {}".format(cars2.shape[0]))
###Output
Cars1 Observations: 198
Cars2 Observations: 200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2], ignore_index=True)
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
owners = random.randint(15000, 73000, 398)
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars["owners"] = owners
cars.head()
###Output
_____no_output_____
###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). Step 3. Assign each to a variable called cars1 and cars2
###Code
cars1 = pd.read_csv('cars1.csv')
cars2 = pd.read_csv('cars2.csv')
cars1.head()
cars2.head()
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
cars1.drop(columns = [col for col in cars1 if "Unnamed" in col] , inplace = True)
cars1.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
print(len(cars1))
print(len(cars2))
###Output
198
200
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
cars = pd.concat([cars1, cars2])
cars.head()
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
import numpy as np
cars['owners'] = np.random.randint(15_000, 73_000,size = (len(cars)))
cars.head()
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
cars.tail()
###Output
_____no_output_____ |
python_package/Tests/MakeDataSet/stitchtest/.ipynb_checkpoints/Untitled-checkpoint.ipynb | ###Markdown
For new image
###Code
vobj2 =VideoStiching.VideoStiching('../../../../videoreader/SolveMap626/626EXP.avi')
#vobj2.extract_relationship()
vobj2.load_data()
vobj2.Optimization(0.075)
vobj2.show_stitched('output626_POC.mp4')
vobj2.load_data()
plt.imshow(vobj2.PeakMat,vmin=0,vmax=1)
vobj2.load_FP()
vobj2.getPeak_FP(40)
vobj2.Optimization()
vobj2.show_stitched('output626_SIFT.mp4')
from matplotlib import pyplot as plt
%matplotlib inline
plt.subplot(2,2,1)
plt.imshow(vobj2.inliersNum,vmin=0,vmax=vobj2.inliersNum.max())
plt.subplot(2,2,2)
plt.imshow(vobj2.matchedNum,vmin=0,vmax=vobj2.matchedNum.max())
plt.subplot(2,2,3)
plt.imshow(vobj2.inliersNum/vobj2.matchedNum,vmin=0,vmax=1)
plt.subplot(2,2,4)
plt.imshow(vobj2.PeakMat,vmin=0,vmax=1)
###Output
C:\Users\yoshi\Anaconda3\envs\tf35\lib\site-packages\ipykernel_launcher.py:8: RuntimeWarning: invalid value encountered in true_divide
###Markdown
Test for new function
###Code
import math
import cv2
import numpy as np
import imregpoc  # assumed importable here: the phase-correlation module referenced as imregpoc.py in the outputs

class VideoStiching:
def __init__(self,videoname):
vidcap = cv2.VideoCapture(videoname)
vnames = videoname.replace('/', '.').split('.')
self.vname = vnames[-2]
success,image = vidcap.read()
if not(success):
print('Cannot open the video!')
exit(-1)
self.frames = []
self.cframes = []
self.frames.append(self.gray(image))
self.cframes.append(image)
self.framenum = 1
while(vidcap.isOpened()):
success,image = vidcap.read()
if success:
self.framenum += 1
self.frames.append(self.gray(image))
self.cframes.append(image)
else:
break
def extract_relationship(self):
self.xMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.yMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.thMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.sMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.PeakMat = np.zeros((self.framenum,self.framenum),dtype=float)
for i in range (0,self.framenum-1):
match = imregpoc.imregpoc(self.frames[i],self.frames[i+1])
x,y,th,s = match.getParam()
peak = match.getPeak()
self.xMat[i,i+1] = x
self.yMat[i,i+1] = y
self.thMat[i,i+1] = th
self.sMat[i,i+1] = s
self.PeakMat[i,i+1] = peak
for j in range (i+1,self.framenum):
match.match_new(self.frames[j])
x,y,th,s = match.getParam()
peak = match.getPeak()
self.xMat[i,j] = x
self.yMat[i,j] = y
self.thMat[i,j] = th
self.sMat[i,j] = s
self.PeakMat[i,j] = peak
print('['+str(i)+','+str(j)+']', end='\r')
def gray(self,frame):
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
return gray
def save_data(self):
#import os
#TARGET_DIR = self.vname
#if not os.path.isdir(TARGET_DIR):
# os.makedirs(TARGET_DIR)
saveMat = np.concatenate([self.xMat,self.yMat,self.thMat,self.sMat,self.PeakMat], axis=0)
output = self.vname+'.csv'
np.savetxt(output,saveMat)
def load_data(self,output=None):
if output==None:
saveMat = np.loadtxt(self.vname+'.csv',delimiter=' ')
else:
saveMat = np.loadtxt(output,delimiter=',')
Mats = np.split(saveMat,5,axis=0)
self.xMat = Mats[0]
self.yMat = Mats[1]
self.thMat = Mats[2]
self.sMat = Mats[3]
self.PeakMat = Mats[4]
def solve_mat(self,vMat,wMat):
#hei,wid = wMat.shape
# step 1: AWA
diagAvec = wMat.sum(axis=0).T + wMat.sum(axis=1)
diagA = np.diag(diagAvec[1:])
tri = wMat[1:,1:]
A = -tri + diagA - tri.T
# step 2: AWy
bmat = wMat * vMat
Bb = bmat.sum(axis=0).T - bmat.sum(axis=1)
B = Bb[1:]
# step 3: AWA^-1 AWy
v = np.dot(np.linalg.inv(A),B)
return v
def reduceMat(self,number):
buf = np.delete(self.sMat,number,0)
self.sMat = np.delete(buf,number,1)
buf = np.delete(self.thMat,number,0)
self.thMat = np.delete(buf,number,1)
buf = np.delete(self.xMat,number,0)
self.xMat = np.delete(buf,number,1)
buf = np.delete(self.yMat,number,0)
self.yMat = np.delete(buf,number,1)
buf = np.delete(self.PeakMat,number,0)
self.PeakMat = np.delete(buf,number,1)
ind2remove = number[0]
for i in sorted(ind2remove, reverse=True):
del self.frames[i]
del self.cframes[i]
def check_valid_mat(self,wMat):
diagAvec = wMat.sum(axis=0).T + wMat.sum(axis=1)
badpart = np.where(diagAvec<2)
badnum = len(badpart[0])
if badnum > 0:
print('Bad frames exists! Delete it.')
# delete bad features
print(badpart[0])
self.reduceMat(badpart)
buf = np.delete(wMat,badpart,0)
wMat = np.delete(buf,badpart,1)
self.framenum -= badnum
return wMat
return wMat
def Optimization(self,threshold=None):
# 1: get a weight matrix
if threshold ==None:
threshold = 0.06
wMat = self.PeakMat
wMat[wMat <threshold] = 0
wMat = self.check_valid_mat(wMat)
# 2-1 optimize theta
vth = self.solve_mat(self.thMat,wMat)
vth = np.append([0],vth,axis=0)
# 2-2 optimize kappa
logsMat = np.log(self.sMat)
logsMat[logsMat==-np.inf] = 0
vlogs = self.solve_mat(logsMat,wMat)
vs = np.exp(vlogs)
vs = np.append([1],vs,axis=0)
# 2-3 optimize x and y
CtaMap = np.tile(vth.reshape(self.framenum,1),(1,self.framenum))
ScaleMap = np.tile(vs.reshape(self.framenum,1),(1,self.framenum))
# conversion matrix
csMap = np.cos(CtaMap)*ScaleMap
snMap = -np.sin(CtaMap)*ScaleMap
# convert x and y
tr_xMat = self.xMat * csMap + self.yMat * snMap
tr_yMat = -self.xMat * snMap + self.yMat * csMap
# solve
vx = np.append([0],self.solve_mat(tr_xMat,wMat),axis=0)
vy = np.append([0],self.solve_mat(tr_yMat,wMat),axis=0)
self.results = np.concatenate([vx,vy,vth,vs],axis=0).reshape(4,self.framenum).T
def load_results(self,fname):
self.results = np.loadtxt(fname,delimiter=',')
def show_stitched(self,vname='output.avi'):
self.match = imregpoc.imregpoc(self.frames[0],self.frames[0])
hei, wid = self.frames[0].shape
center =[hei/2,wid/2]
sxmax = wid-1
sxmin = 0
symax = hei-1
symin = 0
Perspectives = np.zeros((self.framenum,3,3),dtype=float)
# get panorama image size
for i in range (0,self.framenum-1):
perspect = self.match.poc2warp(center,self.results[i])
xmin,ymin,xmax,ymax = self.match.convertRectangle(perspect)
sxmax = max(xmax,sxmax)
sxmin = min(xmin,sxmin)
symax = max(ymax,symax)
symin = min(ymin,symin)
Perspectives[i] = perspect
swidth,sheight = sxmax-sxmin+1,symax-symin+1
xtrans,ytrans = 0-sxmin,0-symin
Trans = np.float32([1,0,xtrans , 0,1,ytrans, 0,0,1]).reshape(3,3)
self.panorama = np.zeros((sheight,swidth,3))
#cv2.cvtColor(self.panorama,CV_G)
#self.panorama[ytrans:ytrans+hei,xtrans:xtrans+wid] = self.match.ref
# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
#fourcc=cv2.VideoWriter_fourcc('W', 'M', 'V', '2')
#Vidout = cv2.VideoWriter(vname,fourcc, 30.0, (swidth,sheight),0)
Vidout = cv2.VideoWriter(vname,fourcc, 30.0, (swidth,sheight))
# stiching
for i in range (0,self.framenum-1):
newTrans = np.dot(Trans,np.linalg.inv(Perspectives[i]))
warpedimage = cv2.warpPerspective(self.cframes[i],newTrans,(swidth,sheight),flags=cv2.INTER_LINEAR+cv2.WARP_FILL_OUTLIERS)
mask = cv2.warpPerspective(np.ones((hei,wid)),newTrans,(swidth,sheight),flags=cv2.INTER_LINEAR+cv2.WARP_FILL_OUTLIERS)
mask[mask < 1] = 0
Imask = np.ones((sheight,swidth),dtype=float)-mask
self.panorama[:,:,0] = self.panorama[:,:,0]*Imask + warpedimage[:,:,0]*mask
self.panorama[:,:,1] = self.panorama[:,:,1]*Imask + warpedimage[:,:,1]*mask
self.panorama[:,:,2] = self.panorama[:,:,2]*Imask + warpedimage[:,:,2]*mask
forwrite = self.panorama.astype(np.uint8)
Vidout.write(forwrite)
cv2.imshow('panorama',self.panorama/255)
cv2.waitKey(5)
cv2.imwrite('panoramaimg.png',self.panorama)
cv2.waitKey(0)
Vidout.release()# release video object
cv2.destroyAllWindows()
def extract_relationship_FP(self):
self.xMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.yMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.thMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.sMat = np.zeros((self.framenum,self.framenum),dtype=float)
self.matchedNum = np.zeros((self.framenum,self.framenum),dtype=float)
self.inliersNum = np.zeros((self.framenum,self.framenum),dtype=float)
for i in range (0,self.framenum-1):
match = imregpoc.TempMatcher(self.frames[i],'SIFT')
for j in range (i+1,self.framenum):
param,counts,inlier = match.match(self.frames[j])
x,y,th,s = param
self.xMat[i,j] = x
self.yMat[i,j] = y
self.thMat[i,j] = th/180*math.pi
self.sMat[i,j] = s
self.matchedNum[i,j] = counts
self.inliersNum[i,j] = inlier
print('['+str(i)+','+str(j)+']', end='\r')
saveMat = np.concatenate([self.xMat,self.yMat,self.thMat,self.sMat,self.matchedNum,self.inliersNum], axis=0)
output = self.vname+'_FP'+'.csv'
np.savetxt(output,saveMat,delimiter=',')
def load_FP(self):
output = self.vname+'_FP'+'.csv'
readMat = np.loadtxt(output,delimiter=',')
Mats = np.split(readMat,6,axis=0)
self.xMat = Mats[0]
self.yMat = Mats[1]
self.thMat = Mats[2]
self.sMat = Mats[3]
self.matchedNum = Mats[4]
self.inliersNum = Mats[5]
def getPeak_FP(self,threshold = 0):
if threshold == 0:
threshold = 50
self.PeakMat = np.copy(self.inliersNum)
self.PeakMat[self.PeakMat<threshold]=0
self.PeakMat[self.PeakMat>=threshold]=1
def ShapedOptimization(self,wMat=None):
# 1: get a weight matrix
if wMat is None:
wMat = np.triu(np.ones(self.framenum),1) - np.triu(np.ones(self.framenum),2)
# 2-1 optimize theta
vth = self.solve_mat(self.thMat,wMat)
vth = np.append([0],vth,axis=0)
# 2-2 optimize kappa
logsMat = np.log(self.sMat)
logsMat[logsMat==-np.inf] = 0
vlogs = self.solve_mat(logsMat,wMat)
vs = np.exp(vlogs)
vs = np.append([1],vs,axis=0)
# 2-3 optimize x and y
CtaMap = np.tile(vth.reshape(self.framenum,1),(1,self.framenum))
ScaleMap = np.tile(vs.reshape(self.framenum,1),(1,self.framenum))
# conversion matrix
csMap = np.cos(CtaMap)*ScaleMap
snMap = -np.sin(CtaMap)*ScaleMap
# convert x and y
tr_xMat = self.xMat * csMap + self.yMat * snMap
tr_yMat = -self.xMat * snMap + self.yMat * csMap
# solve
vx = np.append([0],self.solve_mat(tr_xMat,wMat),axis=0)
vy = np.append([0],self.solve_mat(tr_yMat,wMat),axis=0)
self.results = np.concatenate([vx,vy,vth,vs],axis=0).reshape(4,self.framenum).T
import pickle
with open('read_video_1228.pkl', 'wb') as output:
pickle.dump(videoobj, output, pickle.HIGHEST_PROTOCOL)
vobj3 =VideoStiching('../../../../videoreader/1228/1228.avi')
vobj3.load_data()
vobj3.Optimization(0.12)
vobj3.show_stitched('output_newPOC.mp4')
vobj3 =VideoStiching('../../../../videoreader/1228/1228.avi')
vobj3.load_FP()
vobj3.getPeak_FP(50)
vobj3.Optimization()
vobj3.show_stitched('output1228_reducedSIFT.mp4')
vobj3 =VideoStiching('../../../../videoreader/1228/1228.avi')
vobj3.load_data()
vobj3.ShapedOptimization()
vobj3.show_stitched('output_noOptPOC.mp4')
###Output
C:\Users\yoshi\Anaconda3\envs\tf35\lib\site-packages\ipykernel_launcher.py:264: RuntimeWarning: divide by zero encountered in log
../..\imregpoc.py:270: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if perspective == None:
###Markdown
vA.shape
###Code
vobj3 =VideoStiching('../../../../videoreader/1228/1228.avi')
vobj3.load_FP()
vobj3.getPeak_FP(10)
vobj3.ShapedOptimization()
vobj3.show_stitched('output1228_noOptSIFT.mp4')
###Output
C:\Users\yoshi\Anaconda3\envs\tf35\lib\site-packages\ipykernel_launcher.py:264: RuntimeWarning: divide by zero encountered in log
../..\imregpoc.py:270: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if perspective == None:
###Markdown
for New image
###Code
vobj3 =VideoStiching('../../../../videoreader/SolveMap626/626EXP.avi')
vobj3.load_data()
vobj3.ShapedOptimization()
vobj3.show_stitched('output626c_noOptPOC.mp4')
vobj3 =VideoStiching('../../../../videoreader/SolveMap626/626EXP.avi')
vobj3.load_FP()
vobj3.getPeak_FP(10)
vobj3.ShapedOptimization()
vobj3.show_stitched('output626_noOptSIFT.mp4')
vobj3.load_data()
vobj3.Optimization()
vobj3.show_stitched('output626_OptPOC.mp4')
vobj3.load_FP()
vobj3.getPeak_FP(10)
vobj3.Optimization()
vobj3.show_stitched('output626_OptSIFT.mp4')
# poc
vobj.load_data()
vobj.Optimization(0.075)
vobj.show_stitched()
from matplotlib import pyplot as plt
%matplotlib inline
plt.subplot(2,2,1)
plt.imshow(vobj.inliersNum,vmin=0,vmax=vobj.inliersNum.max())
plt.subplot(2,2,2)
plt.imshow(vobj.matchedNum,vmin=0,vmax=vobj.matchedNum.max())
plt.subplot(2,2,3)
plt.imshow(vobj.inliersNum/vobj.matchedNum,vmin=0,vmax=1)
plt.subplot(2,2,4)
plt.imshow(vobj.PeakMat,vmin=0,vmax=1)
vobj.Optimization()
vobj.show_stitched()
###Output
C:\Users\yoshi\Anaconda3\envs\tf35\lib\site-packages\ipykernel_launcher.py:115: RuntimeWarning: divide by zero encountered in log
..\imregpoc.py:270: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if perspective == None:
###Markdown
testfor
###Code
vobj2 =VideoStiching('../../../videoreader/SolveMap626/626EXP.avi')
#vobj2.extract_relationship()
vobj2.load_data()
vobj2.Optimization(0.075)
vobj2.show_stitched()
vobj2.save_data()
vobj2.extract_relationship_FP()
vobj2.PeakMat = np.copy(vobj2.inliersNum)
vobj2.PeakMat[vobj2.PeakMat<30] = 0
vobj2.PeakMat[vobj2.PeakMat>=30] = 1
vobj2.show_stitched()
###Output
..\imregpoc.py:270: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if perspective == None:
|
ch4_pandas_graph_anscombe.ipynb | ###Markdown
Drawing graphs - why data visualization matters 1) A look at Anscombe's quartet A classic example of why data visualization matters is Anscombe's quartet. The British statistician Frank Anscombe built this set of graphs to show the trap you can fall into when you only check summary numbers without visualizing the data. So what is that trap? The dataset behind Anscombe's quartet is made up of four groups, and each group has the same mean and variance, the same correlation, and the same regression line. Seeing only those results, you might assume the four groups of data are all alike — that is the trap. Once each group is visualized, however, it is immediately clear that the four groups follow quite different patterns. - Loading the Anscombe dataset The Anscombe dataset is included in the seaborn library. Passing the string anscombe to seaborn's load_dataset method loads the Anscombe dataset.
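For instance, a quick numerical check of that claim might look like the sketch below (assuming seaborn and pandas are installed); the per-group summary statistics come out nearly identical even though the groups look nothing alike when plotted.

```python
import seaborn as sns

anscombe = sns.load_dataset("anscombe")

# Per-group mean and standard deviation of x and y are almost the same...
print(anscombe.groupby("dataset").agg(["mean", "std"]))

# ...and so is the correlation between x and y within each group.
print(anscombe.groupby("dataset").apply(lambda g: g["x"].corr(g["y"])))
```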
###Code
import seaborn as sns
anscombe = sns.load_dataset("anscombe")
print(anscombe)
print(type(anscombe))
###Output
dataset x y
0 I 10.0 8.04
1 I 8.0 6.95
2 I 13.0 7.58
3 I 9.0 8.81
4 I 11.0 8.33
5 I 14.0 9.96
6 I 6.0 7.24
7 I 4.0 4.26
8 I 12.0 10.84
9 I 7.0 4.82
10 I 5.0 5.68
11 II 10.0 9.14
12 II 8.0 8.14
13 II 13.0 8.74
14 II 9.0 8.77
15 II 11.0 9.26
16 II 14.0 8.10
17 II 6.0 6.13
18 II 4.0 3.10
19 II 12.0 9.13
20 II 7.0 7.26
21 II 5.0 4.74
22 III 10.0 7.46
23 III 8.0 6.77
24 III 13.0 12.74
25 III 9.0 7.11
26 III 11.0 7.81
27 III 14.0 8.84
28 III 6.0 6.08
29 III 4.0 5.39
30 III 12.0 8.15
31 III 7.0 6.42
32 III 5.0 5.73
33 IV 8.0 6.58
34 IV 8.0 5.76
35 IV 8.0 7.71
36 IV 8.0 8.84
37 IV 8.0 8.47
38 IV 8.0 7.04
39 IV 8.0 5.25
40 IV 19.0 12.50
41 IV 8.0 5.56
42 IV 8.0 7.91
43 IV 8.0 6.89
<class 'pandas.core.frame.DataFrame'>
###Markdown
- Drawing graphs with the matplotlib library Now that the Anscombe dataset is ready, let's visualize it as graphs. We use the matplotlib library to draw them.
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The following extracts only the rows of the anscombe DataFrame whose dataset column equals 'I'. A line graph is drawn with the plot method; pass it the x and y columns of the dataset_1 prepared above.
###Code
dataset_1 = anscombe[anscombe['dataset'] == 'I']
plt.plot(dataset_1['x'], dataset_1['y'])
###Output
_____no_output_____
###Markdown
The plot method draws a line graph by default. To draw points instead, pass 'o' as the third argument.
###Code
plt.plot(dataset_1['x'], dataset_1['y'], 'o')
###Output
_____no_output_____
###Markdown
2) Drawing graphs with the whole Anscombe dataset The Anscombe dataset consists of four data groups, and to see how the groups differ you have to visualize them as graphs. This time, let's draw a graph for every data group. - Drawing graphs with the matplotlib library What we cover here is drawing graphs with the matplotlib library. Looking at the steps laid out below, you will notice the process is a lot like assembling blocks. 1. Create the base figure that will hold the whole set of graphs. 2. Create the grid of axes the graphs will be drawn into. 3. Then add the graphs to the grid one by one; graphs are added to the grid from left to right. 4. Once the first row of the grid is full, graphs are drawn into the second row. In the exercise below, the grid drawn from the Anscombe dataset has size 4, and the third graph lands at row 2, column 1. - Drawing 4 graphs at once Use boolean extraction on the anscombe DataFrame's dataset column to pull out groups II, III, and IV.
###Code
dataset_2 = anscombe[anscombe['dataset'] == 'II']
dataset_3 = anscombe[anscombe['dataset'] == 'III']
dataset_4 = anscombe[anscombe['dataset'] == 'IV']
###Output
_____no_output_____
###Markdown
Create the base figure that the grid of axes will be placed on.
###Code
fig = plt.figure()
axes1 = fig.add_subplot(2,2,1)
axes2 = fig.add_subplot(2,2,2)
axes3 = fig.add_subplot(2,2,3)
axes4 = fig.add_subplot(2,2,4)
###Output
_____no_output_____
###Markdown
Now pass the data to the plot method to draw each graph. You must enter fig again to see the result.
###Code
axes1.plot(dataset_1['x'], dataset_1['y'], 'o')
axes2.plot(dataset_2['x'], dataset_2['y'], 'o')
axes3.plot(dataset_3['x'], dataset_3['y'], 'o')
axes4.plot(dataset_4['x'], dataset_4['y'], 'o')
fig
###Output
_____no_output_____
###Markdown
Add a title to each subplot so the four graphs are easy to tell apart; pass the name of each graph to the set_title method.
###Code
axes1.set_title("dataset_1")
axes2.set_title("dataset_2")
axes3.set_title("dataset_3")
axes4.set_title("dataset_4")
fig
###Output
_____no_output_____
###Markdown
Let's add a title to the figure itself as well. To add a title to the whole figure, use the suptitle method.
###Code
fig.suptitle("Anscombe Data")
fig
###Output
_____no_output_____
###Markdown
Looking at the figure above, though, the titles and the numbers overlap. When that happens, call the tight_layout method.
###Code
fig.tight_layout()
fig
###Output
_____no_output_____ |
site/ru/guide/upgrade.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Automatically upgrade code to TensorFlow 2 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: everything in this section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Because that translation is not official, there is no guarantee that it is 100% accurate and matches the [official English documentation](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving the translation, please open a pull request against the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository; to help make the TensorFlow documentation better (by translating or reviewing translations), write to the [docs-ru@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). TensorFlow 2.0 includes many API changes, such as reordered arguments, renamed symbols, and changed default values for parameters. Fixing all of these modifications by hand is tedious and error-prone. To simplify the changes and make your transition to TF 2.0 as smooth as possible, the TensorFlow team created the `tf_upgrade_v2` utility, which helps move legacy code over to the new API. Note: `tf_upgrade_v2` is installed automatically with TensorFlow 1.13 and later (including all TF 2.0 builds). Typical usage looks like this: tf_upgrade_v2 \ --intree my_project/ \ --outtree my_project_v2/ \ --reportfile report.txt This speeds up the upgrade by converting your existing TensorFlow 1.x Python scripts to TensorFlow 2.0. The conversion script automates as much of the process as possible, but there are still syntactic and stylistic changes that the script cannot make for you. Compatibility modules Some API symbols cannot be upgraded with a simple string replacement. To guarantee that your code keeps working in TensorFlow 2.0, the upgrade script relies on the `compat.v1` module. It replaces TF 1.x symbols such as `tf.foo` with the equivalent `tf.compat.v1.foo` reference. Although the compatibility module is fine, we recommend that you manually proofread those replacements and migrate them to the new APIs in the `tf.*` namespace, rather than the `tf.compat.v1` namespace, as soon as you can. Because some modules were removed from TensorFlow 2.x (for example `tf.flags` and `tf.contrib`), some changes cannot be worked around by switching to `compat.v1`. Upgrading that code may require an additional library (for example [`absl.flags`](https://github.com/abseil/abseil-py)) or switching to a package in [tensorflow/addons](http://www.github.com/tensorflow/addons). Recommended upgrade process The rest of this guide demonstrates how to use the upgrade script. Although the script is easy to use, we strongly recommend running it as part of the following process: 1. **Unit tests**: make sure the code you are upgrading has a unit-test suite with reasonable coverage. This is Python code, so the language will not protect you from many classes of mistakes. Also make sure every dependency you use has already been upgraded to be compatible with TensorFlow 2.0. 1. **Install TensorFlow 1.14**: upgrade your TensorFlow to the latest TensorFlow 1.x release, at least 1.14. It includes the final TensorFlow 2.0 API in `tf.compat.v2`. 1. **Test with 1.14**: make sure your unit tests pass at this point.
You will be re-running them throughout the upgrade, so it is important to start from green. 1. **Run the upgrade script**: run `tf_upgrade_v2` over the whole source tree, tests included. This upgrades your code to a form that only uses symbols available in TensorFlow 2.0; deprecated symbols are accessed through `tf.compat.v1`. They will eventually need manual attention, but not right away. 1. **Run the converted tests with TensorFlow 1.14**: your code should still run correctly on TensorFlow 1.14. Run the unit tests again; any test failure at this stage means there is a bug in the upgrade script, so [please report it to us](https://github.com/tensorflow/tensorflow/issues). 1. **Check the upgrade report for warnings and errors**: the script writes a report file that explains every conversion you should double-check and every action you have to take manually. For example, any remaining instances of contrib require manual removal; please consult the [RFC for further instructions](https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md). 1. **Install TensorFlow 2.0**: at this point it should be safe to switch to TensorFlow 2.0. 1. **Test with `v1.disable_v2_behavior`**: re-run your tests with `v1.disable_v2_behavior()` in the tests' main function; the results should be the same as when running under 1.14. 1. **Enable V2 behavior**: now that your tests pass against the v2 API, you can start looking into enabling v2 behavior. Depending on how your code is written, this may require some changes. See the [migration guide](migrate.ipynb) for details. Using the upgrade script Setup Before starting, make sure TensorFlow 2.0 is installed.
###Code
try:
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Clone the [tensorflow/models](https://github.com/tensorflow/models) git repository so that you have some code to test on:
###Code
!git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models
###Output
_____no_output_____
###Markdown
Read the help The script ships with TensorFlow. Here is the built-in help:
###Code
!tf_upgrade_v2 -h
###Output
_____no_output_____
###Markdown
Example TF1 code Here is a simple TensorFlow 1.0 script:
###Code
!head -n 65 models/samples/cookbook/regression/custom_regression.py | tail -n 10
###Output
_____no_output_____
###Markdown
With TensorFlow 2.0 installed, it does not run:
###Code
!(cd models/samples/cookbook/regression && python custom_regression.py)
###Output
_____no_output_____
###Markdown
Single file The upgrade script can be run on a single Python file:
###Code
!tf_upgrade_v2 \
--infile models/samples/cookbook/regression/custom_regression.py \
--outfile /tmp/custom_regression_v2.py
###Output
_____no_output_____
###Markdown
The script will print errors if it cannot find a fix for the code. Directory tree Typical projects, including this simple example, use more than one file. Usually you want to upgrade an entire package, so the script can also be run on a directory tree:
###Code
# upgrade the .py files and copy all the other files to the outtree
!tf_upgrade_v2 \
--intree models/samples/cookbook/regression/ \
--outtree regression_v2/ \
--reportfile tree_report.txt
###Output
_____no_output_____
###Markdown
Note the single warning about the `dataset.make_one_shot_iterator` function. Now the script works with TensorFlow 2.0. Note also that, thanks to the `tf.compat.v1` module, the converted script will run in TensorFlow 1.14 as well.
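For reference, the rewrite behind that warning typically looks like the following sketch (illustrative, not the actual diff produced above):

```python
# Before (TF 1.x):
iterator = dataset.make_one_shot_iterator()
# After tf_upgrade_v2 (illustrative):
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
```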
###Code
!(cd regression_v2 && python custom_regression.py 2>&1) | tail
###Output
_____no_output_____
###Markdown
Detailed report The script also reports a detailed list of changes. In this example it found one possibly unsafe transformation and added a warning at the top of the file:
###Code
!head -n 20 tree_report.txt
###Output
_____no_output_____
###Markdown
Note, once again, the one warning about the `Dataset.make_one_shot_iterator` function. In the other cases the output explains the reason for non-trivial changes:
###Code
%%writefile dropout.py
import tensorflow as tf
d = tf.nn.dropout(tf.range(10), 0.2)
z = tf.zeros_like(d, optimize=False)
!tf_upgrade_v2 \
--infile dropout.py \
--outfile dropout_v2.py \
--reportfile dropout_report.txt > /dev/null
!cat dropout_report.txt
###Output
_____no_output_____
###Markdown
Here is the modified content of the file; note how the script adds argument names to deal with moved and renamed arguments:
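If you cannot run the cell, the converted `dropout.py` should look roughly like the sketch below (illustrative): the positional `keep_prob` value is rewritten as the `rate` keyword argument.

```python
# before: d = tf.nn.dropout(tf.range(10), 0.2)
# after (illustrative):
d = tf.nn.dropout(tf.range(10), rate=1 - (0.2))
```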
###Code
!cat dropout_v2.py
###Output
_____no_output_____
###Markdown
A larger project might contain some errors. For example, convert the deeplab model:
###Code
!tf_upgrade_v2 \
--intree models/research/deeplab \
--outtree deeplab_v2 \
--reportfile deeplab_report.txt > /dev/null
###Output
_____no_output_____
###Markdown
This generated the output files:
###Code
!ls deeplab_v2
###Output
_____no_output_____
###Markdown
But there were errors. The report will help you pinpoint what you need to fix before this will run. Here are the first three errors:
###Code
!cat deeplab_report.txt | grep -i models/research/deeplab | grep -i error | head -n 3
###Output
_____no_output_____
###Markdown
"Safety" mode The conversion script also has a less invasive `SAFETY` mode which simply changes the imports to use the `tensorflow.compat.v1` module:
###Code
!cat dropout.py
!tf_upgrade_v2 --mode SAFETY --infile dropout.py --outfile dropout_v2_safe.py > /dev/null
!cat dropout_v2_safe.py
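# For readers who cannot run this cell: SAFETY mode leaves the body of dropout.py
# untouched and only rewrites the import line, roughly (illustrative):
#   before: import tensorflow as tf
#   after:  import tensorflow.compat.v1 as tf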
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____ |
notebooks/CNN_2D_MultiLabel_Submit.ipynb | ###Markdown
STEP 1: FUNCTIONS
###Code
class FreeSoundDataset(Dataset):
""" FreeSound dataset."""
# Initialize your data, download, etc.
def __init__(self, X, y):
self.len = X.shape[0]
self.x_data = torch.from_numpy(X)
self.y_data = torch.from_numpy(y)
def __getitem__(self, index):
return (self.x_data[index], self.y_data[index])
def __len__(self):
return self.len
class SubmitFreeSoundDataset(Dataset):
""" FreeSound dataset."""
# Initialize your data, download, etc.
def __init__(self, X):
self.len = X.shape[0]
self.x_data = torch.from_numpy(X)
def __getitem__(self, index):
return (self.x_data[index])
def __len__(self):
return self.len
###Output
_____no_output_____
###Markdown
STEP 2: LOADING DATASET
###Code
X_train = np.load('../data/processed/mel/train_curated_mel128.npy')
X_test = np.load('../data/processed/mel/test_mel128_len200.npy')
X_train = X_train[:, : ,:128]
X_test = X_test[:, : ,:128]
y_train = np.load('../data/processed/y_onehotenc_train_curated.npy')
print('X_train:', X_train.shape)
print('X_test:', X_test.shape)
print('y_train:', y_train.shape)
train_dataset = FreeSoundDataset(X_train, y_train)
test_dataset = SubmitFreeSoundDataset(X_test)
###Output
_____no_output_____
###Markdown
STEP 2: MAKING DATASET ITERABLE
###Code
batch_size = 32
n_iters = 1000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
num_epochs
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size= batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
###Output
_____no_output_____
###Markdown
STEP 3: CREATE MODEL CLASS
###Code
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=200, kernel_size=3, stride=1, padding=1)
self.batchnorm1 = nn.BatchNorm2d(200)
self.relu1 = nn.ReLU()
# # Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=200, out_channels=100, kernel_size=3, stride=1, padding=1)
self.batchnorm2 = nn.BatchNorm2d(100)
self.relu2 = nn.ReLU()
# # Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Convolution 3
self.cnn3 = nn.Conv2d(in_channels=100, out_channels=100, kernel_size=3, stride=1, padding=1)
self.batchnorm3 = nn.BatchNorm2d(100)
self.relu3 = nn.ReLU()
# # Max pool 3
self.maxpool3 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(100 * 16 * 16, 80)
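        # Note: the 100 * 16 * 16 input size above follows from the data shape used below:
        # inputs are 128x128 mel-spectrogram patches (X_train[:, :, :128]), and the three
        # 2x2 max-pools reduce 128 -> 64 -> 32 -> 16, leaving 100 channels of 16x16 maps.
        # The 80 outputs match the 80 one-hot encoded labels in y_train.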
def forward(self, x):
# Convolution 1
out = self.cnn1(x.float())
out = self.batchnorm1(out)
out = self.relu1(out)
# Max pool 1
out = self.maxpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.batchnorm2(out)
out = self.relu2(out)
# Max pool 2
out = self.maxpool2(out)
# Convolution 3
out = self.cnn3(out)
out = self.batchnorm3(out)
out = self.relu3(out)
# Max pool 3
out = self.maxpool3(out)
# Dropout 1
#out = self.dropout(out)
# Resize
        # Original size: (batch_size, 100, 16, 16)
        # out.size(0): batch_size
        # New out size: (batch_size, 100*16*16)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
###Output
_____no_output_____
###Markdown
STEP 4: INSTANTIATE MODEL CLASS
###Code
model = CNNModel()
#######################
# USE GPU FOR MODEL #
#######################
if torch.cuda.is_available():
model.cuda()
###Output
_____no_output_____
###Markdown
STEP 5: INSTANTIATE LOSS CLASS
###Code
criterion = nn.MultiLabelSoftMarginLoss()
###Output
_____no_output_____
###Markdown
STEP 6: INSTANTIATE OPTIMIZER CLASS
###Code
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
STEP 7: TRAIN THE MODEL
###Code
niter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
#######################
# USE GPU FOR MODEL #
#######################
if torch.cuda.is_available():
images = Variable(images.unsqueeze(1).cuda())
labels = Variable(labels.float().cuda())
else:
images = Variable(images.unsqueeze(1))
labels = Variable(labels)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
#images = images.unsqueeze(1).type(torch.FloatTensor).cuda()
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
niter += 1
if niter % 500 == 0:
print('Iteration: {}. Loss: {}. '.format(niter, loss.data, '\n'))
submit = []
for images in test_loader:
#######################
# USE GPU FOR MODEL #
#######################
if torch.cuda.is_available():
images = Variable(images.unsqueeze(1).cuda())
else:
images = Variable(images.unsqueeze(1))
# Forward pass only to get logits/output
outputs = model(images)
if len(submit):
submit = np.concatenate((submit, outputs.cpu().detach().numpy()), axis=0)
else:
submit = outputs.cpu().detach().numpy()
submit.shape
test = pd.read_csv('../data/raw/sample_submission.csv')
test.head()
submit_final = pd.DataFrame(submit)
submit_final.insert(0, 'fname', test['fname'].values, allow_duplicates = False)
submit_final['fname'] = test['fname']
submit_final.columns = test.columns
submit_final.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
QC Programming/QFT frequency manipulation.ipynb | ###Markdown
**QFT frequency manipulation**
###Code
import numpy as np
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
import math
%matplotlib inline
# Set up the program
signal = QuantumRegister(4, name='signal')
scratch = QuantumRegister(1, name='scratch')
qc = QuantumCircuit(signal, scratch)
def main():
    ## Prepare a complex sinusoidal signal
freq = 2;
for i in range(len(signal)):
if (1 << i) & freq:
qc.x(signal[i]);
qc.barrier()
invQFT(signal)
## Move to frequency space with QFT
qc.barrier()
QFT(signal)
## Increase the frequency of signal
qc.barrier()
add_int(signal, 1)
# Move back from frequency space
qc.barrier()
invQFT(signal)
def QFT(qreg):
## This QFT implementation is adapted from IBM's sample:
## https://github.com/Qiskit/qiskit-terra/blob/master/examples/python/qft.py
## ...with a few adjustments to match the book QFT implementation exactly
n = len(qreg)
for j in range(n):
for k in range(j):
qc.cu1(-math.pi/float(2**(j-k)), qreg[n-j-1], qreg[n-k-1])
qc.h(qreg[n-j-1])
# Now finish the QFT by reversing the order of the qubits
for j in range(n//2):
qc.swap(qreg[j], qreg[n-j-1])
def invQFT(qreg):
## This QFT implementation is adapted from IBM's sample:
## https://github.com/Qiskit/qiskit-terra/blob/master/examples/python/qft.py
## ...with a few adjustments to match the book QFT implementation exactly
n = len(qreg)
# Start the inverse QFT by reversing the order of the qubits
for j in range(n//2):
qc.swap(qreg[j], qreg[n-j-1])
n = len(qreg)
for j in range(n):
for k in range(j):
qc.cu1(-math.pi/float(2**(j-k)), qreg[j], qreg[k])
qc.h(qreg[j])
def add_int(qdest, rhs):
reverse_to_subtract = False
if rhs == 0:
return
elif rhs < 0:
rhs = -rhs
reverse_to_subtract = True
ops = []
add_val = int(rhs)
condition_mask = (1 << len(qdest)) - 1
add_val_mask = 1
while add_val_mask <= add_val:
cmask = condition_mask & ~(add_val_mask - 1)
if add_val_mask & add_val:
add_shift_mask = 1 << (len(qdest) - 1)
while add_shift_mask >= add_val_mask:
cmask &= ~add_shift_mask
ops.append((add_shift_mask, cmask))
add_shift_mask >>= 1
condition_mask &= ~add_val_mask
add_val_mask <<= 1
if reverse_to_subtract:
ops.reverse()
for inst in ops:
op_qubits = []
mask = 1
for i in range(len(qdest)):
if inst[1] & (1 << i):
op_qubits.append(qdest[i])
for i in range(len(qdest)):
if inst[0] & (1 << i):
op_qubits.append(qdest[i])
multi_cx(op_qubits)
def multi_cz(qubits):
## This will perform a CCCCCZ on as many qubits as we want,
## as long as we have enough scratch qubits
multi_cx(qubits, do_cz=True)
def multi_cx(qubits, do_cz=False):
## This will perform a CCCCCX with as many conditions as we want,
## as long as we have enough scratch qubits
## The last qubit in the list is the target.
target = qubits[-1]
conds = qubits[:-1]
scratch_index = 0
ops = []
while len(conds) > 2:
new_conds = []
for i in range(len(conds)//2):
ops.append((conds[i * 2], conds[i * 2 + 1], scratch[scratch_index]))
new_conds.append(scratch[scratch_index])
scratch_index += 1
if len(conds) & 1:
new_conds.append(conds[-1])
conds = new_conds
for op in ops:
qc.ccx(op[0], op[1], op[2])
if do_cz:
qc.h(target)
if len(conds) == 0:
qc.x(target)
elif len(conds) == 1:
qc.cx(conds[0], target)
else:
qc.ccx(conds[0], conds[1], target)
if do_cz:
qc.h(target)
ops.reverse()
for op in ops:
qc.ccx(op[0], op[1], op[2])
main()
backend = BasicAer.get_backend('statevector_simulator')
job = execute(qc, backend)
result = job.result()
outputstate = result.get_statevector(qc, decimals=3)
for i,amp in enumerate(outputstate):
if abs(amp) > 0.000001:
prob = abs(amp) * abs(amp)
print('|{}> {} probability = {}%'.format(i, amp, round(prob * 100, 5)))
qc.draw() # draw the circuit
###Output
|0> (0.25-0j) probability = 6.25%
|1> (0.231+0.096j) probability = 6.2577%
|2> (0.177+0.177j) probability = 6.2658%
|3> (0.096+0.231j) probability = 6.2577%
|4> 0.25j probability = 6.25%
|5> (-0.096+0.231j) probability = 6.2577%
|6> (-0.177+0.177j) probability = 6.2658%
|7> (-0.231+0.096j) probability = 6.2577%
|8> (-0.25+0j) probability = 6.25%
|9> (-0.231-0.096j) probability = 6.2577%
|10> (-0.177-0.177j) probability = 6.2658%
|11> (-0.096-0.231j) probability = 6.2577%
|12> (-0-0.25j) probability = 6.25%
|13> (0.096-0.231j) probability = 6.2577%
|14> (0.177-0.177j) probability = 6.2658%
|15> (0.231-0.096j) probability = 6.2577%
|
7.Regression/2.Multiple Regression/Multiple linear regression 2.ipynb | ###Markdown
Multiple Linear Regression
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('../Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
dataset.head()
dataset.info()
dataset.describe()
import seaborn as sns
sns.pairplot(dataset)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
prediction = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((prediction.reshape(len(prediction),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import r2_score
r2_score(y_test, prediction)
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, prediction)
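# Root mean squared error (RMSE) is another common regression metric; quick check for comparison:
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(y_test, prediction))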
plt.scatter(y_test, prediction)
###Output
_____no_output_____ |
tensorboard_embeddings.ipynb | ###Markdown
Visualize word2vec embeddings in tensorboard
###Code
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
import gensim
import tensorflow as tf
import numpy as np
from tensorflow.contrib.tensorboard.plugins import projector
###Output
_____no_output_____
###Markdown
Load saved word2vec model to visualize.
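For context, a file like `w2v_model` could have been produced with an older gensim (3.x) release, which matches the attributes used below (`model.wv.vocab`, `model.layer1_size`). A minimal, hypothetical sketch (in gensim 4.x the size argument is `vector_size` and the vocab attribute is `key_to_index`):

```python
from gensim.models import Word2Vec

sentences = [["hello", "world"], ["word", "embeddings", "demo"]]  # toy corpus
w2v = Word2Vec(sentences, size=100, window=5, min_count=1, workers=2)
w2v.save("w2v_model")
```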
###Code
fname = "w2v_model"
model = gensim.models.keyedvectors.KeyedVectors.load(fname)
# project vocab
max = len(model.wv.vocab)-1
w2v = np.zeros((max,model.layer1_size))
with open("metadata.tsv", 'w+') as file_metadata:
for i,word in enumerate(model.wv.index2word[:max]):
w2v[i] = model.wv[word]
file_metadata.write(word + '\n')
# define the model without training
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
embedding = tf.Variable(w2v, trainable=False, name='embedding')
tf.global_variables_initializer().run()
path = 'tensorboard/1'
saver = tf.train.Saver()
writer = tf.summary.FileWriter(path, sess.graph)
# adding into projector
config = projector.ProjectorConfig()
embed= config.embeddings.add()
embed.tensor_name = 'embedding'
embed.metadata_path = 'metadata.tsv'
# Specify the width and height of a single thumbnail.
projector.visualize_embeddings(writer, config)
saver.save(sess, path+'/model.ckpt', global_step=max)
# open tensorboard with logdir, check localhost:6006 for viewing your embedding.
# tensorboard --logdir="tensorboard/1"
###Output
_____no_output_____ |
More-DL/pure_Tensorflow_2.0/tensorflow_v2/notebooks/1_Introduction/helloworld.ipynb | ###Markdown
Hello World A very simple "hello world" using TensorFlow v2 tensors. - Author: Aymeric Damien - Project: https://github.com/aymericdamien/TensorFlow-Examples/
###Code
import tensorflow as tf
# Create a Tensor.
hello = tf.constant("hello world")
print(hello)
# To access a Tensor value, call numpy().
print(hello.numpy())
###Output
hello world
|
notebooks/medical-fraud-enrich-aliases-to-excel.ipynb | ###Markdown
Getting Data for Medical Fraud Detection The need, in this case, is to get potentially relevant variables, with _intelligible_ column names, into Excel for follow-on medical fraud detection analyses. This example will run with Python API version 1.9.1 _if_ you are using a Web GIS, but it will use credits (a LOT) if you are using ArcGIS Online. Once the `enrich` method is added, this will run locally as well.
###Code
from pathlib import Path
import os
from arcgis.features import GeoAccessor
from arcgis.geoenrichment import Country, enrich
from arcgis.gis import GIS
from dotenv import load_dotenv, find_dotenv
import pandas as pd
# paths to common data locations
dir_prj = Path.cwd().parent
dir_data = dir_prj/'data'
dir_int = dir_data/'interim'
# load environment variables from .env
load_dotenv(find_dotenv())
# create a GIS object instance; if you did not enter any information here, it defaults to anonymous access to ArcGIS Online
gis = GIS(
url=os.getenv('ESRI_GIS_URL'),
username=os.getenv('ESRI_GIS_USERNAME'),
    password=None if len(os.getenv('ESRI_GIS_PASSWORD')) == 0 else os.getenv('ESRI_GIS_PASSWORD')
)
gis
###Output
_____no_output_____
###Markdown
Create a Country Object InstanceThe starting point is creation of a `Country` object instance to work with.
###Code
cntry = Country('usa', gis=gis)
cntry
###Output
_____no_output_____
###Markdown
Retrieve Standard Geographies for AnalysisStandard geographies can be retrieved from the country object. Especially for the CBSA's, the exact string can be difficult to figure out. Thankfully, the [`standard_geography_query` method](https://developers.arcgis.com/python/api-reference/arcgis.geoenrichment.htmlstandard-geography-query) can be used to search for the exact string to use for retrieving subgeographies.
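If the exact key is unknown, one option (a sketch that assumes the `cbsa` collection behaves like the plain mapping the indexing below suggests) is to scan its keys for a substring:

```python
# Hypothetical lookup of the exact CBSA key used below
[k for k in cntry.subgeographies.cbsa.keys() if 'Olympia' in k]
```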
###Code
zip_dict = cntry.subgeographies.cbsa['Olympia-Lacey-Tumwater,_WA_Metropolitan_Statistical_Area'].zip5
zip_dict
###Output
_____no_output_____
###Markdown
Enrich Get Variables for EnrichmentUsing the filtering capabilities for Pandas data frames, we can quickly create a list of variables to work with. It is useful to note, even though I do not take advantage of it below, the [Pandas `contains`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html) method supports [Python regular expression](https://docs.python.org/3/howto/regex.html) string syntax facilitating very powerful filtering.
###Code
ev = cntry.enrich_variables
sv = ev[
(ev.data_collection == 'Health') # "Health" data collection
& (ev.vintage.str.endswith('2021')) # 2021 variables
& (~ev.alias.str.contains('Avg:')) # Exclude averages
& (~ev.alias.str.contains('Index:')) # Exclude index variables
].drop_duplicates('name').reset_index(drop=True)
sv
###Output
_____no_output_____
###Markdown
Perform EnrichmentEnrichment is as straightforward as running the [`enrich` method](https://developers.arcgis.com/python/api-reference/arcgis.geoenrichment.htmlenrich). Please notice the input for the `analysis_variables` parameter, a list of variable names. In the next release, you will be able to just input the filtered enrichment variables dataframe to make this easier, but for now, we still need to prepare the input for this parameter a bit.
###Code
enrich_df = enrich(zip_dict, analysis_variables=list(sv.name), return_geometry=False, gis=gis)
enrich_df
###Output
_____no_output_____
###Markdown
Add Aliases Create Pandas Series for Alias LookupWe can create a Pandas series enabling easy column alias lookup by removing duplicate names, set the index to the column name, and just keeping the alias column.
###Code
var_lookup = ev.drop_duplicates('name').set_index('name')['alias']
var_lookup
###Output
_____no_output_____
###Markdown
Use Alias List to Look Up Relevant Column Aliases Using a ternary operator in a list comprehension with the Pandas Series created in the last step enables us to look up an alias when there is a match and keep the existing column name when there is not. This enables us to create a list of column names for the output data.
###Code
alias_lst = [var_lookup.loc[c] if c in var_lookup.index else c for c in enrich_df.columns]
alias_lst
###Output
_____no_output_____
###Markdown
Prune ColumnsIf the intention for the output data is for subsequent analysis, it is easier to just have the unique identifier, in this case the zip code, and the enriched columns in the final output. We can create a list of these columns using a list comprehension to filter the column names.
###Code
id_col = 'StdGeographyID'
keep_col_lst = [id_col] + [c for c in enrich_df.columns if c in var_lookup.index]
sel_df = enrich_df.loc[:,keep_col_lst]
sel_df.head()
###Output
_____no_output_____
###Markdown
Apply Aliases Using the same method as above, we can create a list of aliases. These aliases can then be applied to the output data frame. Also, to facilitate quick retrieval by ID, we can set the index to this ID.
###Code
alias_lst = [var_lookup.loc[c] if c in var_lookup.index else c for c in keep_col_lst]
sel_df.columns = alias_lst
sel_df.set_index('StdGeographyID', inplace=True)
sel_df.head()
###Output
_____no_output_____
###Markdown
Final Product - Export to Excel For follow-on analysis using Excel, Pandas data frames can easily be saved to Excel.
###Code
sel_df.to_excel(dir_int/'esri_enriched.xlsx')
###Output
_____no_output_____ |
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_11/Final/Resampling.ipynb | ###Markdown
Resampling documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html For arguments to 'freq' parameter, please see [Offset Aliases](http://pandas.pydata.org/pandas-docs/stable/timeseries.htmloffset-aliases). Create a date range to use as an index
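A few common aliases for the `freq` argument, for reference (this sketch assumes pandas is imported as `pd`, as in the following cells):

```python
pd.date_range('9/1/2016', periods=3, freq='min')   # calendar minutes
pd.date_range('9/1/2016', periods=3, freq='30S')   # 30-second steps
pd.date_range('9/1/2016', periods=3, freq='3min')  # 3-minute steps
pd.date_range('9/1/2016', periods=3, freq='D')     # calendar days
```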
###Code
import numpy as np
import pandas as pd

# min: minutes
my_index = pd.date_range('9/1/2016', periods=9, freq='min')
my_index
###Output
_____no_output_____
###Markdown
create a time series that includes a simple pattern
###Code
my_series = pd.Series(np.arange(9), index=my_index)
my_series
###Output
_____no_output_____
###Markdown
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin
###Code
my_series.resample('3min').sum()
###Output
_____no_output_____
###Markdown
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Notice the difference in the time indices; the sum in each bin is the same
###Code
my_series.resample('3min', label='right').sum()
###Output
_____no_output_____
###Markdown
Downsample the series into 3 minute bins as above, but close the right side of the bin interval. "Count backwards" from the end of the time series
###Code
my_series.resample('3min', label='right', closed='right').sum()
###Output
_____no_output_____
###Markdown
Upsample the series into 30 second bins. [asfreq()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.asfreq.html)
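Upsampling creates timestamps that have no observations, so `asfreq()` leaves them as `NaN`; a common follow-up is to fill them, for example (sketch):

```python
my_series.resample('30S').ffill()[0:5]   # forward-fill from the last known value
my_series.resample('30S').bfill()[0:5]   # back-fill from the next known value
```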
###Code
#select first 5 rows
my_series.resample('30S').asfreq()[0:5]
###Output
_____no_output_____
###Markdown
define a custom function to use with resampling
###Code
def custom_arithmetic(array_like):
temp = 3 * np.sum(array_like) + 5
return temp
###Output
_____no_output_____
###Markdown
apply custom resampling function
###Code
my_series.resample('3min').apply(custom_arithmetic)
###Output
_____no_output_____ |
notebooks/gpt_fr_evaluation.ipynb | ###Markdown
**Copyright 2021 Antoine SIMOULIN.** Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Evaluate GPT-fr 🇫🇷 on FLUE **GPT-fr** is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). In this notebook, we provide the minimal script to evaluate the model on the FLUE benchmark ([Le et al., 2020a](le-2020-en), [2020b](le-2020-fr)). FLUE aims to better compare and evaluate NLP models for French. If you're opening this Notebook on Colab, you will probably need to install 🤗 Transformers and 🤗 Tokenizers. We also provide some scripts to download the data and fine-tune the model. The scripts are based on the ones provided with the [FLUE benchmark](https://github.com/getalp/Flaubert).
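As a quick, illustrative sanity check that the model loads (not part of the FLUE evaluation; the French prompt and generation settings here are arbitrary):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")

inputs = tokenizer("Longtemps, je me suis couché de bonne heure.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```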
###Code
%%capture
!pip install git+https://github.com/huggingface/transformers.git
!pip install tokenizers
!pip install datasets
!test -f download_flue_data.sh || wget https://github.com/AntoineSimoulin/gpt-fr/tree/main/scripts/download_flue_data.sh .
!test -f run_flue.py || wget https://github.com/AntoineSimoulin/gpt-fr/tree/main/scripts/run_flue.py .
!test -f run_flue.py || wget https://github.com/AntoineSimoulin/gpt-fr/tree/main/scripts/spinner.sh .
!chmod +x ./download_flue_data.sh
!chmod +x ./spinner.sh
###Output
_____no_output_____
###Markdown
Requirements
###Code
import torch
import transformers
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Check GPU is available and libraries version
print('Pytorch version ...............{}'.format(torch.__version__))
print('Transformers version ..........{}'.format(transformers.__version__))
print('GPU available .................{}'.format('\u2705' if torch.cuda.device_count() > 0 else '\u274c'))
print('Available devices .............{}'.format(torch.cuda.device_count()))
print('Active CUDA Device: ...........{}'.format(torch.cuda.current_device()))
print('Current cuda device: ..........{}'.format(torch.cuda.current_device()))
###Output
Pytorch version ...............1.9.0+cu102
Transformers version ..........4.11.0.dev0
GPU available .................✅
Available devices .............1
Active CUDA Device: ...........0
Current cuda device: ..........0
###Markdown
Download and prepare data FLUE includes 6 tasks with various levels of difficulty, degrees of formality, and amounts of training samples: * The Cross-Lingual Sentiment (**CLS**) task is sentiment classification on Amazon reviews. Each subtask (books, DVD, music) is a binary classification task (positive/negative). * The Cross-lingual Adversarial Dataset for Paraphrase Identification (**PAWSX**) is a paraphrase identification task: the goal is to predict whether the sentences in each pair are semantically equivalent or not. * The Cross-lingual NLI (**XNLI**) task is natural language inference: given a premise (p) and a hypothesis (h), the goal is to determine whether p entails, contradicts, or neither entails nor contradicts h. * The **Parsing and Part-of-Speech Tagging** task aims to infer constituency and dependency syntactic trees and part-of-speech tags. * The Word Sense Disambiguation (**WSD**) task is a classification task which aims to predict the sense of words in a given context according to a specific sense inventory.
###Code
TASK = 'CLS-Books' #@param ["CLS-Books", "CLS-DVD", "CLS-Music", "PAWSX", "XNLI", "Parsing-Dep", "Parsing-Const", "WSD-Verb", "WSD-Nouns"]
TASK_NAME = TASK.lower().split('-')[0]
# We download the FLUE data for the selected task. To download all the data, don't use the flag `-t $TASK`,
# with `TASK` in "CLS-Books" "CLS-DVD" "CLS-Music" "PAWSX" "XNLI"
# "Parsing-Dep" "Parsing-Const" "WSD-Verb" "WSD-Nouns".
# The Parsing data are under licences which require creating an account
# and therefore need to be downloaded manually.
# Please refer to https://dokufarm.phil.hhu.de/spmrl2014/ for instructions.
!test -d ./flue_data || mkdir ./flue_data
!./download_flue_data.sh -d ./flue_data -t $TASK
###Output
Downloading CLS
Preprocessing CLS
###Markdown
Evaluate on FLUE
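Before launching the fine-tuning below, it can be useful to inspect the preprocessed files. The following sketch is illustrative: it assumes the download script wrote tab-separated `train.tsv`/`dev.tsv`/`test.tsv` files with `sentence` and `label` columns, which is the format expected by `run_flue.py`.
###Code
# Illustrative inspection of the preprocessed data (assumes `sentence` and `label` columns).
import pandas as pd

train_df = pd.read_csv('/content/flue_data/{}/train.tsv'.format(TASK), sep='\t')
print(train_df.shape)
print(train_df['label'].value_counts())  # check the class balance
train_df.head()
###Output
_____no_output_____
###Markdown
Evaluate on FLUE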
###Code
MODEL = 'asi/gpt-fr-cased-small' #@param {type:"string"}
MAX_SEQ_LENGTH = 256 #@param {type:"integer"}
#@markdown Batch sizes and learning rates should be separated with "/" for the cross-validation parameter search.
BATCH_SIZES = '8' #@param {type:"string"}
LEARNING_RATES = '5e-5/3e-5/2e-5/5e-6/1e-6' #@param {type:"string"}
NUM_TRAIN_EPOCHS = 4 #@param {type:"integer"}
#@markdown For the CLS task, the train set size is limited. The standard deviation might be high and a random seed search might impact the results.
N_SEEDS = 5 #@param {type:"integer"}
#@markdown If the batch size does not fit into device memory, it is possible to adjust the accumulation steps. The final batch size will be equal to `GRAD_ACCUMULATION_STEPS * BATCH_SIZE`.
GRAD_ACCUMULATION_STEPS = 1 #@param {type:"integer"}
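# Note (illustrative assumption): `run_flue.py` is expected to split the "/"-separated
# strings into a search grid, e.g. `[float(lr) for lr in LEARNING_RATES.split('/')]`,
# and to train `N_SEEDS` models per (learning rate, batch size) pair. The effective
# batch size per update is GRAD_ACCUMULATION_STEPS * BATCH_SIZE.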
!python run_flue.py \
--train_file /content/flue_data/$TASK/train.tsv \
--validation_file /content/flue_data/$TASK/dev.tsv \
--predict_file /content/flue_data/$TASK/test.tsv \
--model_name_or_path $MODEL \
--tokenizer_name $MODEL \
--output_dir /content/flue_data/$TASK \
--max_seq_length $MAX_SEQ_LENGTH \
--do_train \
--do_eval \
--do_predict \
--task_name $TASK_NAME \
--learning_rates 5e-6 \
--batch_sizes $BATCH_SIZES \
--gradient_accumulation_steps $GRAD_ACCUMULATION_STEPS \
--num_train_epochs $NUM_TRAIN_EPOCHS \
--n_seeds $N_SEEDS
###Output
09/06/2021 16:13:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
09/06/2021 16:13:20 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
greater_is_better=None,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/content/flue_data/CLS-Books/runs/Sep06_16-13-20_9b1263df06e8,
logging_first_step=False,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=4.0,
output_dir=/content/flue_data/CLS-Books,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=CLS-Books,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=/content/flue_data/CLS-Books,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
start hyper-parameters search with : lr: 5e-06 and batch_size: 8 without seed 0
09/06/2021 16:13:20 - WARNING - datasets.builder - Using custom data configuration default-665dcdfc5830c464
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff. Subsequent calls will reuse this data.
run_flue.py:272: FutureWarning: cast_ is deprecated and will be removed in the next major version of datasets. Use DatasetDict.cast instead.
'label': ClassLabel(num_classes=2),
100% 1/1 [00:00<00:00, 180.63ba/s]
100% 1/1 [00:00<00:00, 603.41ba/s]
100% 1/1 [00:00<00:00, 172.08ba/s]
09/06/2021 16:13:20 - INFO - filelock - Lock 140647921587088 acquired on /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee.lock
https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp73b00rd3
Downloading: 100% 538/538 [00:00<00:00, 655kB/s]
storing https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
creating metadata file for /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
09/06/2021 16:13:21 - INFO - filelock - Lock 140647921587088 released on /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee.lock
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"finetuning_task": "sst2",
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": false,
"vocab_size": 50000
}
09/06/2021 16:13:21 - INFO - filelock - Lock 140647921316880 acquired on /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41.lock
https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp55l5_dry
Downloading: 100% 853k/853k [00:00<00:00, 2.75MB/s]
storing https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json in cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
creating metadata file for /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
09/06/2021 16:13:22 - INFO - filelock - Lock 140647921316880 released on /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41.lock
09/06/2021 16:13:22 - INFO - filelock - Lock 140647921586512 acquired on /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978.lock
https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpkpbe1ugn
Downloading: 100% 513k/513k [00:00<00:00, 1.65MB/s]
storing https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt in cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
creating metadata file for /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
09/06/2021 16:13:23 - INFO - filelock - Lock 140647921586512 released on /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978.lock
09/06/2021 16:13:23 - INFO - filelock - Lock 140647921545424 acquired on /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe.lock
https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpfultiao6
Downloading: 100% 121/121 [00:00<00:00, 162kB/s]
storing https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json in cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
creating metadata file for /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
09/06/2021 16:13:23 - INFO - filelock - Lock 140647921545424 released on /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe.lock
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer_config.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer.json from cache at None
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 50000
}
Assigning </s> to the eos_token key of the tokenizer
Assigning <s> to the bos_token key of the tokenizer
Assigning <unk> to the unk_token key of the tokenizer
Assigning <pad> to the pad_token key of the tokenizer
Assigning <mask> to the mask_token key of the tokenizer
09/06/2021 16:13:25 - INFO - filelock - Lock 140647921317968 acquired on /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f.lock
https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmplk6b8t04
Downloading: 100% 510M/510M [00:14<00:00, 36.2MB/s]
storing https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
creating metadata file for /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
09/06/2021 16:13:39 - INFO - filelock - Lock 140647921317968 released on /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f.lock
loading weights file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
Some weights of the model checkpoint at asi/gpt-fr-cased-small were not used when initializing GPT2ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at asi/gpt-fr-cased-small and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100% 2/2 [00:01<00:00, 1.06ba/s]
100% 1/1 [00:00<00:00, 3.07ba/s]
100% 2/2 [00:01<00:00, 1.24ba/s]
09/06/2021 16:13:48 - INFO - __main__ - Sample 76 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [2874, 3686, 214, 258, 22661, 426, 10948, 272, 13166, 262, 17116, 4791, 307, 260, 35174, 16, 361, 371, 7700, 322, 47139, 748, 389, 528, 219, 357, 214, 5024, 214, 545, 307, 376, 2534, 301, 214, 7945, 748, 32110, 294, 260, 1048, 11081, 748, 1035, 281, 11, 2501, 10140, 249, 3160, 214, 1100, 1356, 18128, 16, 281, 11, 11698, 42699, 810, 628, 14566, 307, 260, 18275, 23254, 748, 21107, 40219, 719, 219, 482, 1224, 1686, 16, 421, 371, 2220, 2373, 6245, 244, 376, 2534, 16, 361, 371, 1133, 322, 3087, 207, 11, 408, 5540, 207, 11, 15398, 294, 1849, 608, 262, 33528, 5124, 16, 27594, 249, 24945, 214, 28536, 450, 608, 310, 44222, 748, 455, 226, 11, 18918, 244, 262, 7945, 249, 487, 357, 20992, 16, 281, 11, 222, 210, 11, 6011, 214, 1247, 239, 12231, 207, 11, 408, 34421, 207, 11, 2693, 6546, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "Les livres de Lignac sont placés au rayon des Grands chefs dans les librairies, il ne faudrait pas abuser ! Il y a plus de photos de lui dans ce livre que de recettes ! Idem pour les autres volumes ! Si j'avais souhaité un album de notre cher Cyril, j'aurai découpé tous ces portraits dans les magazines People ! Jamie Olliver a fait beaucoup mieux, je ne vois aucun intérêt à ce livre, il ne faut pas sortir d'une école d'ingénieur pour savoir faire des crêpes, cuire un pavé de saumon ou faire une purée ! Je m'attendais à des recettes un peu plus originales, j'ai l'impression de voir le menu d'une cantine d'école primaire."}.
09/06/2021 16:13:48 - INFO - __main__ - Sample 1465 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [6554, 533, 27028, 249, 2534, 3047, 207, 11, 35378, 244, 18292, 278, 294, 4019, 246, 10730, 260, 295, 41855, 17, 42, 88, 214, 394, 1094, 16, 371, 4443, 322, 504, 960, 16, 14152, 1769, 17, 211, 249, 1130, 18, 1035, 533, 4869, 10788, 376, 8534, 207, 11, 232, 1284, 214, 2140, 21327, 16, 728, 39999, 1347, 12616, 16, 250, 19591, 307, 239, 3723, 262, 32469, 16, 728, 1849, 1593, 4862, 210, 11, 8154, 208, 502, 16, 5633, 17, 92, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "Si vous cherchez un livre simple d'initiation à Scheme pour comprendre et modifier les Script-Fu de Gimp, ne faites pas comme moi, choisissez-en un autre. Si vous voulez étudier ce langage d'un point de vue théorique, sans allumer votre ordinateur, en entrant dans le détail des algorithmes, sans savoir comment lancer l'interpréteur, allez-y."}.
09/06/2021 16:13:48 - INFO - __main__ - Sample 1540 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [2285, 1921, 14058, 6984, 272, 5203, 24614, 16, 260, 36952, 360, 33737, 533, 9737, 461, 1074, 18, 367, 4736, 4014, 314, 699, 661, 1181, 264, 330, 1113, 207, 11, 2007, 207, 11, 3443, 5542, 24846, 383, 28412, 18, 666, 323, 1003, 6664, 307, 234, 5881, 262, 5238, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "La situation géographique prête au rêve oriental, les senteurs délicates vous ennivrent. Le contexte historique est très interressant sur fond d'histoire d'amour impossible traitée avec délicatesse. On se met facilement dans la peau des personnages."}.
The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running training *****
Num examples = 1599
Num Epochs = 4
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 800
{'loss': 0.4438, 'learning_rate': 1.8750000000000003e-06, 'epoch': 2.5}
100% 800/800 [05:53<00:00, 2.32it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 353.4817, 'train_samples_per_second': 18.094, 'train_steps_per_second': 2.263, 'train_loss': 0.3600461959838867, 'epoch': 4.0}
100% 800/800 [05:53<00:00, 2.26it/s]
Saving model checkpoint to /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8
Configuration saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/config.json
Model weights saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/pytorch_model.bin
tokenizer config file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/tokenizer_config.json
Special tokens file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/special_tokens_map.json
09/06/2021 16:19:49 - INFO - __main__ - *** Evaluate ***
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 399
Batch size = 8
100% 50/50 [00:07<00:00, 7.13it/s]
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 1999
Batch size = 8
100% 250/250 [00:36<00:00, 6.94it/s]
09/06/2021 16:20:33 - INFO - __main__ - ***** Eval results cls *****
09/06/2021 16:20:33 - INFO - __main__ - eval_loss = 0.4606432616710663
09/06/2021 16:20:33 - INFO - __main__ - eval_accuracy = 0.8621553884711779
09/06/2021 16:20:33 - INFO - __main__ - eval_runtime = 7.398
09/06/2021 16:20:33 - INFO - __main__ - eval_samples_per_second = 53.933
09/06/2021 16:20:33 - INFO - __main__ - eval_steps_per_second = 6.759
09/06/2021 16:20:33 - INFO - __main__ - epoch = 4.0
start hyper-parameters search with : lr: 5e-06 and batch_size: 8 without seed 1
09/06/2021 16:20:33 - WARNING - datasets.builder - Using custom data configuration default-665dcdfc5830c464
09/06/2021 16:20:33 - WARNING - datasets.builder - Reusing dataset csv (/root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff)
09/06/2021 16:20:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-1b016ff4da8063ca.arrow
09/06/2021 16:20:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-90f985e022facb9b.arrow
09/06/2021 16:20:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-3111a60baca4fa5e.arrow
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"finetuning_task": "sst2",
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": false,
"vocab_size": 50000
}
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer_config.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer.json from cache at None
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 50000
}
Assigning </s> to the eos_token key of the tokenizer
Assigning <s> to the bos_token key of the tokenizer
Assigning <unk> to the unk_token key of the tokenizer
Assigning <pad> to the pad_token key of the tokenizer
Assigning <mask> to the mask_token key of the tokenizer
loading weights file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
Some weights of the model checkpoint at asi/gpt-fr-cased-small were not used when initializing GPT2ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at asi/gpt-fr-cased-small and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
09/06/2021 16:20:38 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-01eca7454d23239d.arrow
100% 1/1 [00:00<00:00, 1.90ba/s]
100% 2/2 [00:01<00:00, 1.04ba/s]
09/06/2021 16:20:42 - INFO - __main__ - Sample 1309 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [11067, 19027, 214, 2808, 35668, 250, 27911, 503, 3114, 294, 239, 871, 27906, 18, 258, 11, 23161, 314, 6892, 6502, 16, 239, 19532, 1373, 214, 1865, 275, 1179, 244, 234, 532, 599, 301, 3431, 25183, 214, 525, 450, 214, 2013, 234, 46916, 275, 3199, 455, 3799, 4332, 18, 732, 319, 314, 9462, 16, 217, 11, 263, 214, 4019, 210, 11, 2453, 214, 376, 8421, 2663, 686, 35224, 236, 11, 263, 17, 267, 319, 219, 10824, 249, 1462, 728, 3114, 244, 4938, 3242, 249, 6373, 246, 664, 6373, 35224, 310, 4042, 25004, 88, 7290, 748, 367, 357, 41088, 307, 503, 3114, 314, 214, 323, 660, 301, 3431, 207, 11, 443, 546, 30022, 249, 9580, 18, 48423, 515, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "Impossible de rester insensible en lisant cette histoire pour le moins inquiétante. L'intrigue est parfaitement menée, le suspense reste de mise du début à la fin bien que chacun connaisse de prés ou de loin la schizophrénie du Dr Jekyll. Ce qui est intéressant, c'est de comprendre l'origine de ce dédoublement : qu'est-ce qui a amené un homme sans histoire à vouloir devenir un Autre et quel Autre : une véritable monstruosité ! Le plus troublant dans cette histoire est de se dire que chacun d'entre nous renferme un Mr. Hyde..."}.
09/06/2021 16:20:42 - INFO - __main__ - Sample 228 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [6554, 210, 11, 3136, 35859, 314, 13591, 16, 421, 243, 11, 3408, 322, 1887, 234, 2099, 723, 210, 11, 2532, 6778, 376, 2534, 30, 234, 532, 314, 33868, 16, 210, 11, 11316, 262, 5238, 594, 18, 210, 11, 44017, 47260, 214, 503, 1787, 314, 980, 808, 38796, 18, 421, 16918, 214, 239, 4168, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "Si l'idée originelle est excellente, je n'aime pas vraiment la façon dont l'auteur traite ce livre: la fin est décevante, l' évolution des personnages aussi. l' Analyse sociologique de cette société est toute fois pertinente. je conseille de le lire."}.
09/06/2021 16:20:42 - INFO - __main__ - Sample 51 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [50, 11, 263, 322, 16045, 17, 45810, 319, 1561, 18, 367, 998, 17, 32831, 16, 210, 11, 11068, 32085, 16, 503, 2807, 214, 13630, 3308, 8701, 11547, 16, 371, 234, 8733, 322, 319, 1561, 18, 704, 3455, 20208, 371, 234, 8733, 322, 18, 1566, 3560, 3549, 339, 32150, 6287, 13, 314, 249, 968, 1501, 529, 44699, 16, 663, 1328, 260, 5214, 17304, 244, 8296, 16, 663, 260, 9045, 214, 9833, 1907, 1506, 458, 16, 13503, 16, 244, 883, 260, 1064, 2487, 6287, 16, 319, 2168, 833, 249, 487, 239, 1373, 16, 210, 11, 2517, 314, 9604, 551, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "N'est pas anglo-saxon qui veut. Le non-sens, l'humour décalé, cette marque de fabrique so british, ne la maîtrise pas qui veut. Et Martin Page ne la maîtrise pas. Son court roman (120 pages) est un long pensum laborieux, où toutes les idées tombent à plat, où les essais de poésie font flop, bref, à part les quelques dernières pages, qui sauvent un peu le reste, l'ensemble est oubliable."}.
The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running training *****
Num examples = 1599
Num Epochs = 4
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 800
{'loss': 0.3785, 'learning_rate': 1.8750000000000003e-06, 'epoch': 2.5}
100% 800/800 [05:58<00:00, 2.31it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 358.2236, 'train_samples_per_second': 17.855, 'train_steps_per_second': 2.233, 'train_loss': 0.3178840732574463, 'epoch': 4.0}
100% 800/800 [05:58<00:00, 2.23it/s]
Saving model checkpoint to /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8
Configuration saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/config.json
Model weights saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/pytorch_model.bin
tokenizer config file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/tokenizer_config.json
Special tokens file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/special_tokens_map.json
09/06/2021 16:26:42 - INFO - __main__ - *** Evaluate ***
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 399
Batch size = 8
100% 50/50 [00:07<00:00, 7.08it/s]
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 1999
Batch size = 8
100% 250/250 [00:35<00:00, 6.96it/s]
09/06/2021 16:27:25 - INFO - __main__ - ***** Eval results cls *****
09/06/2021 16:27:25 - INFO - __main__ - eval_loss = 0.5335955619812012
09/06/2021 16:27:25 - INFO - __main__ - eval_accuracy = 0.8822055137844611
09/06/2021 16:27:25 - INFO - __main__ - eval_runtime = 7.1893
09/06/2021 16:27:25 - INFO - __main__ - eval_samples_per_second = 55.499
09/06/2021 16:27:25 - INFO - __main__ - eval_steps_per_second = 6.955
09/06/2021 16:27:25 - INFO - __main__ - epoch = 4.0
start hyper-parameters search with : lr: 5e-06 and batch_size: 8 without seed 2
09/06/2021 16:27:26 - WARNING - datasets.builder - Using custom data configuration default-665dcdfc5830c464
09/06/2021 16:27:26 - WARNING - datasets.builder - Reusing dataset csv (/root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff)
09/06/2021 16:27:26 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-1b016ff4da8063ca.arrow
09/06/2021 16:27:26 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-90f985e022facb9b.arrow
09/06/2021 16:27:26 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-3111a60baca4fa5e.arrow
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"finetuning_task": "sst2",
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": false,
"vocab_size": 50000
}
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer_config.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer.json from cache at None
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 50000
}
Assigning </s> to the eos_token key of the tokenizer
Assigning <s> to the bos_token key of the tokenizer
Assigning <unk> to the unk_token key of the tokenizer
Assigning <pad> to the pad_token key of the tokenizer
Assigning <mask> to the mask_token key of the tokenizer
loading weights file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
Some weights of the model checkpoint at asi/gpt-fr-cased-small were not used when initializing GPT2ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at asi/gpt-fr-cased-small and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
09/06/2021 16:27:30 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-01eca7454d23239d.arrow
09/06/2021 16:27:31 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-97a7a5a5a66c441e.arrow
100% 2/2 [00:02<00:00, 1.05s/ba]
09/06/2021 16:27:34 - INFO - __main__ - Sample 1309 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [11067, 19027, 214, 2808, 35668, 250, 27911, 503, 3114, 294, 239, 871, 27906, 18, 258, 11, 23161, 314, 6892, 6502, 16, 239, 19532, 1373, 214, 1865, 275, 1179, 244, 234, 532, 599, 301, 3431, 25183, 214, 525, 450, 214, 2013, 234, 46916, 275, 3199, 455, 3799, 4332, 18, 732, 319, 314, 9462, 16, 217, 11, 263, 214, 4019, 210, 11, 2453, 214, 376, 8421, 2663, 686, 35224, 236, 11, 263, 17, 267, 319, 219, 10824, 249, 1462, 728, 3114, 244, 4938, 3242, 249, 6373, 246, 664, 6373, 35224, 310, 4042, 25004, 88, 7290, 748, 367, 357, 41088, 307, 503, 3114, 314, 214, 323, 660, 301, 3431, 207, 11, 443, 546, 30022, 249, 9580, 18, 48423, 515, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "Impossible de rester insensible en lisant cette histoire pour le moins inquiétante. L'intrigue est parfaitement menée, le suspense reste de mise du début à la fin bien que chacun connaisse de prés ou de loin la schizophrénie du Dr Jekyll. Ce qui est intéressant, c'est de comprendre l'origine de ce dédoublement : qu'est-ce qui a amené un homme sans histoire à vouloir devenir un Autre et quel Autre : une véritable monstruosité ! Le plus troublant dans cette histoire est de se dire que chacun d'entre nous renferme un Mr. Hyde..."}.
09/06/2021 16:27:34 - INFO - __main__ - Sample 228 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [6554, 210, 11, 3136, 35859, 314, 13591, 16, 421, 243, 11, 3408, 322, 1887, 234, 2099, 723, 210, 11, 2532, 6778, 376, 2534, 30, 234, 532, 314, 33868, 16, 210, 11, 11316, 262, 5238, 594, 18, 210, 11, 44017, 47260, 214, 503, 1787, 314, 980, 808, 38796, 18, 421, 16918, 214, 239, 4168, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "Si l'idée originelle est excellente, je n'aime pas vraiment la façon dont l'auteur traite ce livre: la fin est décevante, l' évolution des personnages aussi. l' Analyse sociologique de cette société est toute fois pertinente. je conseille de le lire."}.
09/06/2021 16:27:34 - INFO - __main__ - Sample 51 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [50, 11, 263, 322, 16045, 17, 45810, 319, 1561, 18, 367, 998, 17, 32831, 16, 210, 11, 11068, 32085, 16, 503, 2807, 214, 13630, 3308, 8701, 11547, 16, 371, 234, 8733, 322, 319, 1561, 18, 704, 3455, 20208, 371, 234, 8733, 322, 18, 1566, 3560, 3549, 339, 32150, 6287, 13, 314, 249, 968, 1501, 529, 44699, 16, 663, 1328, 260, 5214, 17304, 244, 8296, 16, 663, 260, 9045, 214, 9833, 1907, 1506, 458, 16, 13503, 16, 244, 883, 260, 1064, 2487, 6287, 16, 319, 2168, 833, 249, 487, 239, 1373, 16, 210, 11, 2517, 314, 9604, 551, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "N'est pas anglo-saxon qui veut. Le non-sens, l'humour décalé, cette marque de fabrique so british, ne la maîtrise pas qui veut. Et Martin Page ne la maîtrise pas. Son court roman (120 pages) est un long pensum laborieux, où toutes les idées tombent à plat, où les essais de poésie font flop, bref, à part les quelques dernières pages, qui sauvent un peu le reste, l'ensemble est oubliable."}.
The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running training *****
Num examples = 1599
Num Epochs = 4
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 800
{'loss': 0.3785, 'learning_rate': 1.8750000000000003e-06, 'epoch': 2.5}
100% 800/800 [05:58<00:00, 2.31it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 358.2534, 'train_samples_per_second': 17.853, 'train_steps_per_second': 2.233, 'train_loss': 0.3178840732574463, 'epoch': 4.0}
100% 800/800 [05:58<00:00, 2.23it/s]
Saving model checkpoint to /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8
Configuration saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/config.json
Model weights saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/pytorch_model.bin
tokenizer config file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/tokenizer_config.json
Special tokens file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/special_tokens_map.json
09/06/2021 16:33:34 - INFO - __main__ - *** Evaluate ***
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 399
Batch size = 8
100% 50/50 [00:07<00:00, 7.10it/s]
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 1999
Batch size = 8
100% 250/250 [00:35<00:00, 6.95it/s]
09/06/2021 16:34:18 - INFO - __main__ - ***** Eval results cls *****
09/06/2021 16:34:18 - INFO - __main__ - eval_loss = 0.5335955619812012
09/06/2021 16:34:18 - INFO - __main__ - eval_accuracy = 0.8822055137844611
09/06/2021 16:34:18 - INFO - __main__ - eval_runtime = 7.178
09/06/2021 16:34:18 - INFO - __main__ - eval_samples_per_second = 55.587
09/06/2021 16:34:18 - INFO - __main__ - eval_steps_per_second = 6.966
09/06/2021 16:34:18 - INFO - __main__ - epoch = 4.0
start hyper-parameters search with : lr: 5e-06 and batch_size: 8 without seed 3
09/06/2021 16:34:18 - WARNING - datasets.builder - Using custom data configuration default-665dcdfc5830c464
09/06/2021 16:34:18 - WARNING - datasets.builder - Reusing dataset csv (/root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff)
09/06/2021 16:34:18 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-1b016ff4da8063ca.arrow
09/06/2021 16:34:18 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-90f985e022facb9b.arrow
09/06/2021 16:34:18 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-3111a60baca4fa5e.arrow
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"finetuning_task": "sst2",
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": false,
"vocab_size": 50000
}
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer_config.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer.json from cache at None
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 50000
}
Assigning </s> to the eos_token key of the tokenizer
Assigning <s> to the bos_token key of the tokenizer
Assigning <unk> to the unk_token key of the tokenizer
Assigning <pad> to the pad_token key of the tokenizer
Assigning <mask> to the mask_token key of the tokenizer
loading weights file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
Some weights of the model checkpoint at asi/gpt-fr-cased-small were not used when initializing GPT2ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at asi/gpt-fr-cased-small and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
09/06/2021 16:34:23 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-01eca7454d23239d.arrow
09/06/2021 16:34:24 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-97a7a5a5a66c441e.arrow
09/06/2021 16:34:24 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-7a1a88ef7ad42912.arrow
09/06/2021 16:34:24 - INFO - __main__ - Sample 1309 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [11067, 19027, 214, 2808, 35668, 250, 27911, 503, 3114, 294, 239, 871, 27906, 18, 258, 11, 23161, 314, 6892, 6502, 16, 239, 19532, 1373, 214, 1865, 275, 1179, 244, 234, 532, 599, 301, 3431, 25183, 214, 525, 450, 214, 2013, 234, 46916, 275, 3199, 455, 3799, 4332, 18, 732, 319, 314, 9462, 16, 217, 11, 263, 214, 4019, 210, 11, 2453, 214, 376, 8421, 2663, 686, 35224, 236, 11, 263, 17, 267, 319, 219, 10824, 249, 1462, 728, 3114, 244, 4938, 3242, 249, 6373, 246, 664, 6373, 35224, 310, 4042, 25004, 88, 7290, 748, 367, 357, 41088, 307, 503, 3114, 314, 214, 323, 660, 301, 3431, 207, 11, 443, 546, 30022, 249, 9580, 18, 48423, 515, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "Impossible de rester insensible en lisant cette histoire pour le moins inquiétante. L'intrigue est parfaitement menée, le suspense reste de mise du début à la fin bien que chacun connaisse de prés ou de loin la schizophrénie du Dr Jekyll. Ce qui est intéressant, c'est de comprendre l'origine de ce dédoublement : qu'est-ce qui a amené un homme sans histoire à vouloir devenir un Autre et quel Autre : une véritable monstruosité ! Le plus troublant dans cette histoire est de se dire que chacun d'entre nous renferme un Mr. Hyde..."}.
09/06/2021 16:34:24 - INFO - __main__ - Sample 228 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [6554, 210, 11, 3136, 35859, 314, 13591, 16, 421, 243, 11, 3408, 322, 1887, 234, 2099, 723, 210, 11, 2532, 6778, 376, 2534, 30, 234, 532, 314, 33868, 16, 210, 11, 11316, 262, 5238, 594, 18, 210, 11, 44017, 47260, 214, 503, 1787, 314, 980, 808, 38796, 18, 421, 16918, 214, 239, 4168, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "Si l'idée originelle est excellente, je n'aime pas vraiment la façon dont l'auteur traite ce livre: la fin est décevante, l' évolution des personnages aussi. l' Analyse sociologique de cette société est toute fois pertinente. je conseille de le lire."}.
09/06/2021 16:34:24 - INFO - __main__ - Sample 51 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [50, 11, 263, 322, 16045, 17, 45810, 319, 1561, 18, 367, 998, 17, 32831, 16, 210, 11, 11068, 32085, 16, 503, 2807, 214, 13630, 3308, 8701, 11547, 16, 371, 234, 8733, 322, 319, 1561, 18, 704, 3455, 20208, 371, 234, 8733, 322, 18, 1566, 3560, 3549, 339, 32150, 6287, 13, 314, 249, 968, 1501, 529, 44699, 16, 663, 1328, 260, 5214, 17304, 244, 8296, 16, 663, 260, 9045, 214, 9833, 1907, 1506, 458, 16, 13503, 16, 244, 883, 260, 1064, 2487, 6287, 16, 319, 2168, 833, 249, 487, 239, 1373, 16, 210, 11, 2517, 314, 9604, 551, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "N'est pas anglo-saxon qui veut. Le non-sens, l'humour décalé, cette marque de fabrique so british, ne la maîtrise pas qui veut. Et Martin Page ne la maîtrise pas. Son court roman (120 pages) est un long pensum laborieux, où toutes les idées tombent à plat, où les essais de poésie font flop, bref, à part les quelques dernières pages, qui sauvent un peu le reste, l'ensemble est oubliable."}.
The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running training *****
Num examples = 1599
Num Epochs = 4
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 800
{'loss': 0.3785, 'learning_rate': 1.8750000000000003e-06, 'epoch': 2.5}
100% 800/800 [05:58<00:00, 2.32it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 358.0424, 'train_samples_per_second': 17.864, 'train_steps_per_second': 2.234, 'train_loss': 0.3178840732574463, 'epoch': 4.0}
100% 800/800 [05:58<00:00, 2.23it/s]
Saving model checkpoint to /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/gpt-fr-cased-small/flue/cls/3_5e-06_8
Configuration saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/gpt-fr-cased-small/flue/cls/3_5e-06_8/config.json
Model weights saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/gpt-fr-cased-small/flue/cls/3_5e-06_8/pytorch_model.bin
tokenizer config file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/gpt-fr-cased-small/flue/cls/3_5e-06_8/tokenizer_config.json
Special tokens file saved in /content/flue_data/CLS-Books/gpt-fr-cased-small/flue/cls/0_5e-06_8/gpt-fr-cased-small/flue/cls/1_5e-06_8/gpt-fr-cased-small/flue/cls/2_5e-06_8/gpt-fr-cased-small/flue/cls/3_5e-06_8/special_tokens_map.json
09/06/2021 16:40:24 - INFO - __main__ - *** Evaluate ***
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 399
Batch size = 8
100% 50/50 [00:07<00:00, 7.07it/s]
The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running Evaluation *****
Num examples = 1999
Batch size = 8
100% 250/250 [00:36<00:00, 6.94it/s]
09/06/2021 16:41:08 - INFO - __main__ - ***** Eval results cls *****
09/06/2021 16:41:08 - INFO - __main__ - eval_loss = 0.5335955619812012
09/06/2021 16:41:08 - INFO - __main__ - eval_accuracy = 0.8822055137844611
09/06/2021 16:41:08 - INFO - __main__ - eval_runtime = 7.2051
09/06/2021 16:41:08 - INFO - __main__ - eval_samples_per_second = 55.377
09/06/2021 16:41:08 - INFO - __main__ - eval_steps_per_second = 6.939
09/06/2021 16:41:08 - INFO - __main__ - epoch = 4.0
start hyper-parameters search with : lr: 5e-06 and batch_size: 8 without seed 4
09/06/2021 16:41:08 - WARNING - datasets.builder - Using custom data configuration default-665dcdfc5830c464
09/06/2021 16:41:08 - WARNING - datasets.builder - Reusing dataset csv (/root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff)
09/06/2021 16:41:08 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-1b016ff4da8063ca.arrow
09/06/2021 16:41:08 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-90f985e022facb9b.arrow
09/06/2021 16:41:08 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-3111a60baca4fa5e.arrow
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"finetuning_task": "sst2",
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": false,
"vocab_size": 50000
}
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/97d73519e1786bd36d6dab4f2240e77dc8b19cc8535b19f0eb0cc5863d9b6c81.b85636952522fed3a170e2e21a847e912c3a878dedc23912f85546cfa1227f41
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/025c7f852122770d236ec27f3dd32ac9e1f40679c14a8c4bea80600b7ba0add6.e53643bb177d00116553f4d730afde4d2f8f45c1447a76aa963ba9a0a1b73978
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/8bb3968cc09271da6a1adedd33275c0b14b45d7fc81d5ccb6920d4940075b7fe.0f671f161b2dbdaa3a65d346190cb627aac7c67d9c6468ea6a435d7762d446fe
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer_config.json from cache at None
loading file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/tokenizer.json from cache at None
loading configuration file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/12889df88e5263ed62e20128cbb3bebe828aedfd8480954c73d86108d31431a5.97fc21b74d14a9facfb92cee58e1cc1b6abc510049602d7ae0620e0d4f7eacee
Model config GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"pad_token_id": 1,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 50000
}
Assigning </s> to the eos_token key of the tokenizer
Assigning <s> to the bos_token key of the tokenizer
Assigning <unk> to the unk_token key of the tokenizer
Assigning <pad> to the pad_token key of the tokenizer
Assigning <mask> to the mask_token key of the tokenizer
loading weights file https://huggingface.co/asi/gpt-fr-cased-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/16bd2bed8e0f184b6d447c39c2c4bf64135b888c51056ccb56ae0f0bfd9c12a6.661b4443ec5caefcf86cfc76c9bd77815c311c96907d9e2b32129a2bf079cf2f
Some weights of the model checkpoint at asi/gpt-fr-cased-small were not used when initializing GPT2ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at asi/gpt-fr-cased-small and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
09/06/2021 16:41:13 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-01eca7454d23239d.arrow
09/06/2021 16:41:14 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-97a7a5a5a66c441e.arrow
09/06/2021 16:41:15 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-665dcdfc5830c464/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff/cache-7a1a88ef7ad42912.arrow
09/06/2021 16:41:15 - INFO - __main__ - Sample 1309 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [11067, 19027, 214, 2808, 35668, 250, 27911, 503, 3114, 294, 239, 871, 27906, 18, 258, 11, 23161, 314, 6892, 6502, 16, 239, 19532, 1373, 214, 1865, 275, 1179, 244, 234, 532, 599, 301, 3431, 25183, 214, 525, 450, 214, 2013, 234, 46916, 275, 3199, 455, 3799, 4332, 18, 732, 319, 314, 9462, 16, 217, 11, 263, 214, 4019, 210, 11, 2453, 214, 376, 8421, 2663, 686, 35224, 236, 11, 263, 17, 267, 319, 219, 10824, 249, 1462, 728, 3114, 244, 4938, 3242, 249, 6373, 246, 664, 6373, 35224, 310, 4042, 25004, 88, 7290, 748, 367, 357, 41088, 307, 503, 3114, 314, 214, 323, 660, 301, 3431, 207, 11, 443, 546, 30022, 249, 9580, 18, 48423, 515, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence': "Impossible de rester insensible en lisant cette histoire pour le moins inquiétante. L'intrigue est parfaitement menée, le suspense reste de mise du début à la fin bien que chacun connaisse de prés ou de loin la schizophrénie du Dr Jekyll. Ce qui est intéressant, c'est de comprendre l'origine de ce dédoublement : qu'est-ce qui a amené un homme sans histoire à vouloir devenir un Autre et quel Autre : une véritable monstruosité ! Le plus troublant dans cette histoire est de se dire que chacun d'entre nous renferme un Mr. Hyde..."}.
09/06/2021 16:41:15 - INFO - __main__ - Sample 228 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [6554, 210, 11, 3136, 35859, 314, 13591, 16, 421, 243, 11, 3408, 322, 1887, 234, 2099, 723, 210, 11, 2532, 6778, 376, 2534, 30, 234, 532, 314, 33868, 16, 210, 11, 11316, 262, 5238, 594, 18, 210, 11, 44017, 47260, 214, 503, 1787, 314, 980, 808, 38796, 18, 421, 16918, 214, 239, 4168, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "Si l'idée originelle est excellente, je n'aime pas vraiment la façon dont l'auteur traite ce livre: la fin est décevante, l' évolution des personnages aussi. l' Analyse sociologique de cette société est toute fois pertinente. je conseille de le lire."}.
09/06/2021 16:41:15 - INFO - __main__ - Sample 51 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [50, 11, 263, 322, 16045, 17, 45810, 319, 1561, 18, 367, 998, 17, 32831, 16, 210, 11, 11068, 32085, 16, 503, 2807, 214, 13630, 3308, 8701, 11547, 16, 371, 234, 8733, 322, 319, 1561, 18, 704, 3455, 20208, 371, 234, 8733, 322, 18, 1566, 3560, 3549, 339, 32150, 6287, 13, 314, 249, 968, 1501, 529, 44699, 16, 663, 1328, 260, 5214, 17304, 244, 8296, 16, 663, 260, 9045, 214, 9833, 1907, 1506, 458, 16, 13503, 16, 244, 883, 260, 1064, 2487, 6287, 16, 319, 2168, 833, 249, 487, 239, 1373, 16, 210, 11, 2517, 314, 9604, 551, 18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 0, 'sentence': "N'est pas anglo-saxon qui veut. Le non-sens, l'humour décalé, cette marque de fabrique so british, ne la maîtrise pas qui veut. Et Martin Page ne la maîtrise pas. Son court roman (120 pages) est un long pensum laborieux, où toutes les idées tombent à plat, où les essais de poésie font flop, bref, à part les quelques dernières pages, qui sauvent un peu le reste, l'ensemble est oubliable."}.
The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: sentence.
***** Running training *****
Num examples = 1599
Num Epochs = 4
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 800
34% 270/800 [02:01<03:59, 2.21it/s] |
notebooks/B. Alignment statistics.ipynb | ###Markdown
Alignment statistics
This Jupyter notebook calculates alignment statistics (number of chimeric reads, number of chimeric reads supporting an insertion) for samples in both the ILC and the B-ALL datasets. This calculation is performed separately from the other notebooks due to the computationally (or, more specifically, I/O) intensive nature of the analyses.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import sys
sys.path.append('../src')
import pandas as pd
###Output
_____no_output_____
###Markdown
SB dataset
First, we calculate the statistics for the ILC dataset. We do this using the calculate_stats function, which determines the number of reads using flagstat and calculates fusion statistics from the Chimeric.out.junction output file produced by STAR-Fusion.
###Code
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
from imfusion.insertions.aligners.star import read_chimeric_junctions
from nbsupport.util import flagstat
def calculate_stats(base_path, transposon_name, n_cores=1, paired=False):
# Calculate total depth using flagstat.
bam_paths = list(Path(base_path).glob('**/alignment.bam'))
with ProcessPoolExecutor(max_workers=n_cores) as executor:
results = executor.map(flagstat, bam_paths)
flagstats = pd.DataFrame(
dict(zip((fp.parent.name for fp in bam_paths),
(res['count'] for res in results)))).T
if paired:
total = flagstats['both_mapped'] // 2
else:
total = flagstats['mapped'] - flagstats['secondary']
total_reads = (
total.to_frame('total_reads')
.reset_index().rename(columns={'index': 'sample'}))
# Calculate fusion statistics.
fusion_paths = list(Path(base_path).glob('**/_star/Chimeric.out.junction'))
fusion_stats = pd.DataFrame.from_records(
        ((fp.parent.name,) + star_stats(fp, transposon_name)
for fp in fusion_paths),
columns=['sample', 'fusion_reads', 'transposon_fusion_reads'])
# Merge statistics per sample.
merged = pd.merge(total_reads, fusion_stats, on='sample', how='left')
# Add ratios.
merged['ratio_reads_fusion'] = merged['fusion_reads'] / merged['total_reads']
merged['ratio_fusions_transposon'] = (
merged['transposon_fusion_reads'] / merged['fusion_reads'])
return merged
def star_stats(file_path, transposon_name):
junctions = read_chimeric_junctions(file_path)
total_reads = junctions.shape[0]
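    # The XOR (^) keeps junctions where exactly one end maps to the transposon,
    # i.e. putative gene-transposon fusions; junctions with both ends (or neither
    # end) on the transposon are excluded.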
transposon_reads = junctions.loc[(junctions['seqname_a'] == transposon_name) ^
(junctions['seqname_b'] == transposon_name)].shape[0]
return (total_reads, transposon_reads)
sb_stats = calculate_stats('../data/interim/sb/star',
transposon_name='T2onc',
n_cores=20,
paired=False)
sb_stats.head()
sb_stats.mean() * 100
###Output
_____no_output_____
###Markdown
This shows that, on average, 0.1% of the reads in each sample were chimeric reads supporting a potential fusion, of which 0.42% represented a putative gene-transposon fusion.
###Code
(sb_stats
.rename(columns={
'sample': 'Sample',
'total_reads': 'Total reads',
'fusion_reads': 'Fusion reads',
'transposon_fusion_reads': 'Transposon fusion reads',
'ratio_reads_fusion': 'Ratio fusion reads',
'ratio_fusions_transposon': 'Ratio transposon fusions'
})
.to_excel('../reports/supplemental/tables/table_s1_alignment_stats_sb.xlsx', index=False))
###Output
_____no_output_____
###Markdown
B-ALL dataset
Here, we perform the same analysis for the B-ALL dataset.
###Code
sanger_stats = calculate_stats('../data/interim/sanger/star',
transposon_name='T2onc',
n_cores=20,
paired=True)
sanger_stats.head()
(sanger_stats
.rename(columns={
'sample': 'Sample',
'total_reads': 'Total pairs',
'fusion_reads': 'Fusion pairs',
'transposon_fusion_reads': 'Transposon fusion pairs',
'ratio_reads_fusion': 'Ratio fusion pairs',
'ratio_fusions_transposon': 'Ratio transposon fusions'
})
.to_excel('../reports/supplemental/tables/table_s4_alignment_stats_sanger.xlsx', index=False))
sanger_stats.mean() * 100
###Output
_____no_output_____ |
notebooks/20210714-gsa-and_or_classifier.ipynb | ###Markdown
Propositional Classifiers: In this notebook I try to design classifiers that act on the set of all blocks and output a tiebreaker binary label having access only to the features given by Henry in the focal students dataset. Those features are:
1. Total number of students per block (used for normalization purposes)
2. Number of FRL students per block
3. Number of AALPI students per block
4. Number of FRL and AALPI students per block (i.e. intersection of those)
5. Number of FRL or AALPI students per block (i.e. union)
The column names available are:
1. n
2. nFRL and pctFRL
3. nAALPI and pctAALPI
4. nBoth and pctBoth
5. nFocal and pctFocal
The classifiers will evaluate a logical proposition with those features. For example, an "AND" classifier can be of the form:
$$ \text{AALPI} \geq 0.5 \quad \text{and} \quad \text{FRL} \geq 0.7 $$
This classifier will give an equity tiebreaker to a block if and only if that block has over 50% of its students in the AALPI racial group and over 70% of its students receiving FRL.
Currently we do not have a systematic way to think of these types of propositions. But we can evaluate their performance based on false positive and false negative rates. In the case of two parameters (i.e. two numeric comparisons), it is possible to visualize the precision-recall curve.
1. Class Syntax
We load the propositional classifier classes from the modelling library:
###Code
import numpy as np

from src.d04_modeling import propositional_classifier as pc
###Output
_____no_output_____
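###Markdown
As a quick aside, the following self-contained sketch shows what a rule like the example AND classifier computes on a hypothetical block table; the column names follow the dataset description above, but the counts are invented for illustration:
###Code
import pandas as pd

# Hypothetical block-level counts (real column names, invented numbers).
toy_blocks = pd.DataFrame({
    "n": [100, 80, 60],
    "nAALPI": [60, 20, 45],
    "nFRL": [75, 30, 40],
})
toy_blocks["pctAALPI"] = toy_blocks["nAALPI"] / toy_blocks["n"]
toy_blocks["pctFRL"] = toy_blocks["nFRL"] / toy_blocks["n"]

# AALPI >= 0.5 and FRL >= 0.7: a block gets the tiebreaker only if both hold.
tiebreaker = (toy_blocks["pctAALPI"] >= 0.5) & (toy_blocks["pctFRL"] >= 0.7)
print(toy_blocks.assign(tiebreaker=tiebreaker))
###Output
_____no_output_____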
###Markdown
RMK: The first classifier will take some extra seconds to be initialized in order for the data to load
###Code
load_data = pc.andClassifier([])
###Output
_____no_output_____
###Markdown
In its most general form, the propositional classifier class takes as input a list of features to use for evaluation, a list of logical operators ("and" or "or"), and a list of comparators ($\geq$, $\leq$, =). By default, the comparator list is a sequence of $\geq$, since that is the most likely case. The lists must be in the order of the statement we want to construct; notice that there will always be one less operator than features. For example:
###Code
pc1 = pc.PropositionalClassifier(["pctAALPI", "pctFRL", "nBoth"], ["and", "or"])
pc1.statement
###Output
_____no_output_____
###Markdown
Note that the parameters are not required upon initialization. Rather, the statement is constructed so that we can supply parameters when doing the predictions. This way we can vary parameters and build precision-recall curves. Simple and/or classifiers have their own child class, in which we only need to pass the features (and comparators if not default):
###Code
pc2 = pc.andClassifier(["pctAALPI", "pctFRL"])
pc2.statement
pc3 = pc.orClassifier(["pctAALPI", "pctFRL"])
pc3.statement
###Output
_____no_output_____
###Markdown
Some logical statements need parentheses. Others do not, but they are easier to read with parentheses (for example, the statement in pc1 above is hard to interpret without parentheses, since the computer simply evaluates it in order). To group features, simply pass a tuple of features as an element. Note that the operator list must still be of the correct length!
###Code
pc4 = pc.PropositionalClassifier(["pctAALPI", ("pctFRL", "pctBoth"), "nBoth"], ["or", "and", "or"])
pc4.statement
###Output
_____no_output_____
###Markdown
Once we have initialized our statement we can use the get_solution_set method with the appropriate parameters to do a round of prediction.
###Code
params1 = [0.5, 0.8, 6] #parameters must match the features passed, in the order. Note the scale.
pred1 = pc1.get_solution_set(params1)
pred1
###Output
_____no_output_____
###Markdown
This index object tells us which blocks receive the tiebreaker. We can visualize the result in the San Francisco map:
###Code
ax = pc1.plot_map(params1)
###Output
_____no_output_____
###Markdown
The confusion matrix of this classifier can be retrieved using the get_confusion_matrix method:
###Code
cm1 = pc1.get_confusion_matrix(params1)
cm1
###Output
_____no_output_____
###Markdown
Using that we can retrieve any rate for evaluation purposes, for example the FPR and FNR computed below.
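In terms of the confusion matrix counts, these are the standard definitions
$$ \mathrm{FPR} = \frac{FP}{FP + TN}, \qquad \mathrm{FNR} = \frac{FN}{FN + TP}. $$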
###Code
print("False positive rate is {0:.2f} %".format(100*pc1.fpr(params1)))
print("False negative rate is {0:.2f} %".format(100*pc1.fnr(params1)))
###Output
_____no_output_____
###Markdown
To interpret the above map: we wanted the FRL percentage to be very high (above 80%) and the AALPI percentage to be at least half; OR, if there were at least 6 students in a block in the intersection count, the block would be given a tiebreaker regardless of its relative composition. We can see in the map that, due to the AALPI criterion having to be satisfied, most blocks that received a tiebreaker are in the Southeast (where racial minorities are more concentrated). This criterion is very restrictive: the false negative rate is very high, meaning that we "missed" a lot of focal students. However, very few non-focal students received an advantage (less than 15%).
2. Exploring the parameter space
Ideally we would like to explore several points for the trade-off between FP and FN. In one-dimensional parameter spaces (i.e. only one feature is passed to the classifier, so that we have only one parameter) this can be done via analysis of the ROC curve (similar to precision-recall):
###Code
pc5 = pc.PropositionalClassifier(["pctBoth"], [])
pc5.statement
params_arr = [x for x in np.linspace(0, 1, num=100)]
ROC5_df = pc5.get_roc(params_arr)
ROC5_df
pc5.plot_roc(params_arr)
###Output
_____no_output_____
###Markdown
In two-dimensional parameter spaces (i.e. only two features are passed to the classifier, so that we have only two parameters) this can be done via analysis of two matrices of false positives and false negatives. This would be equivalent to a ROC surface.
###Code
pc2.statement
params_arr2 = [x/10 for x in range(11)]
pc2.plot_heatmap(params_arr2)
###Output
_____no_output_____
###Markdown
An alternative is to fix all but one parameter of the proposition so that we can build a ROC curve. Using:
$$ \left(\text{AALPI } \geq 50\% \quad \text{and} \quad \text{FRL } \geq 60\%\right)\quad\text{or}\quad \text{BOTH } (\%)\geq \gamma$$
###Code
pc6 = pc.PropositionalClassifier([("pctAALPI", "pctFRL"), "pctBoth"], ["and", "or"])
pc6.statement
params_6 = [0.5, 0.8, 6]
pc6.plot_map(params_6)
params_arr6 = [[0.5, 0.6, x] for x in np.linspace(0, 1, num=100)]
pc6.plot_roc(params_arr6)
###Output
_____no_output_____ |
notebooks/official/model_monitoring/model_monitoring.ipynb | ###Markdown
Vertex Model Monitoring

Overview

What is Model Monitoring?
Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:
* software versioning
* rigorous deployment processes
* event logging
* alerting/notification of situations requiring intervention
* on-demand and automated diagnostic tracing
* automated performance and functional testing

You should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.

Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:
* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.
* How significantly are service requests evolving over time? This is called **drift detection**.

If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**.

Objective
In this notebook, you will learn how to...
* deploy a pre-trained model
* configure model monitoring
* generate some artificial traffic
* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature

Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* BigQuery

Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

The example model
The model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:
- identity - unique player identity numbers
- demographic features - information about the player, such as the geographic region in which a player is located
- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level
- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.

The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage.
In the next section, you will set up your environment and import this model into your own project.

Before you begin

Set up your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization]
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.

**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functions
Run the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understanding the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpoint
Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.

Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.

**Try this now by running the next cell and examining the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:
- **churned_values** - a set of possible values (0 and 1) for the target field
- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)
- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)

This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application.

Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.

In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:
1. Log sample rate - your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.
1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).
1. Target field - the prediction target column name in the training dataset.
1. Skew detection threshold - the skew threshold for each feature you want to monitor.
1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
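###Markdown
The threshold strings above follow a simple convention: features listed in the default strings get the default threshold value, while the custom strings use `feature:value` pairs. If you want to preview how they will be interpreted before creating the job, the cell below is a minimal sketch that calls the `get_thresholds()` helper defined in this notebook's utility-functions cell and prints the resulting feature-to-threshold mapping.
###Code
# Hedged sketch: preview how the comma-separated threshold strings are
# parsed into per-feature ThresholdConfig values by get_thresholds().
preview_skew = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
preview_drift = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
for feature, threshold in preview_skew.items():
    print(f"skew  threshold for {feature}: {threshold.value}")
for feature, threshold in preview_drift.items():
    print(f"drift threshold for {feature}: {threshold.value}")
###Output
_____no_output_____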
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
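###Markdown
Once the job is created, you can also manage it programmatically with the helper functions defined in this notebook's utility-functions cell. The cell below is a minimal sketch: `monitoring_job` is the response returned by `create_monitoring_job()` above, and the pause call is commented out so you only run it if you actually want to stop analysis.
###Code
# Hedged sketch: manage the monitoring job with the helpers defined in
# the utility-functions cell of this notebook.
list_monitoring_jobs()  # show all monitoring jobs in this project/region
# pause_monitoring_job(monitoring_job.name)  # optionally pause the job created above
###Output
_____no_output_____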
###Markdown
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook and you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
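###Markdown
To build some intuition for why this perturbation should trip the skew alert, you can compare the baseline country sampling weights to the perturbed ones. The cell below is an illustration only: it computes an L-infinity style distance between the two normalized frequency distributions, which is one common way such skew scores are defined; the exact statistic the service computes may differ.
###Code
# Hedged sketch: compare the baseline COUNTRY sampling weights to the
# perturbed ones used by monitoring_test() above. This is illustrative
# only; the service's own skew statistic may be computed differently.
baseline = COUNTRY
perturbed = dict(COUNTRY, Japan=max(COUNTRY.values()) * 2)
def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}
p, q = normalize(baseline), normalize(perturbed)
linf = max(abs(p[k] - q[k]) for k in p)
print(f"L-infinity distance between baseline and perturbed country distributions: {linf:.3f}")
###Output
_____no_output_____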
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country*, and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete endpoint resource (use the endpoint ID, not the display name)
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource (use the model ID, not the display name)
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____
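###Markdown
The monitoring job you created is also a billable resource. The cell below is a minimal sketch that removes it using the `delete_monitoring_job()` helper defined in this notebook's utility-functions cell; it assumes `monitoring_job` still refers to the response returned by `create_monitoring_job()`, otherwise use `list_monitoring_jobs()` to look up the job's resource name.
###Code
# Hedged sketch: delete the model monitoring job created earlier.
# `monitoring_job.name` is the full resource name of the job; use
# list_monitoring_jobs() to find it if the variable is no longer in scope.
delete_monitoring_job(monitoring_job.name)
###Output
_____no_output_____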
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
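###Markdown
For example, you could append the timestamp to the display names used for resources later in this notebook. The cell below is a minimal sketch; the variable names are illustrative, and you would apply the same pattern wherever a display name is set.
###Code
# Hedged sketch: suffix resource display names with the timestamp to
# avoid collisions in a shared project. These names are illustrative.
MODEL_DISPLAY_NAME = f"churn_{TIMESTAMP}"
ENDPOINT_DISPLAY_NAME = f"churn_{TIMESTAMP}"
print(MODEL_DISPLAY_NAME, ENDPOINT_DISPLAY_NAME)
###Output
_____no_output_____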
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click **Create**. A JSON file that contains your key downloads to your local environment.6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell. Login to your Google Cloud account and enable AI services
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = "google.colab" in sys.modules
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# Import required packages.
import os
import random
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set ai/region $REGION
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
###Output
_____no_output_____
###Markdown
The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Define some helper functions and data structuresRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
from google.cloud.aiplatform_v1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1.types.io import BigQuerySource
from google.cloud.aiplatform_v1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1.types.prediction_service import (
ExplainRequest, PredictRequest)
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input, type="predict"):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
if type == "predict":
obj = PredictRequest
method = client.predict
elif type == "explain":
obj = ExplainRequest
method = client.explain
else:
raise Exception("unsupported request type:" + type)
params = {}
params = json_format.ParseDict(params, Value())
request = obj(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = None
try:
response = method(request)
except Exception as ex:
print(ex)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Generate model metadata for explainable AIRun the following cell to extract metadata from the exported model, which is needed for generating the prediction explanations.
###Code
builder = SavedModelMetadataBuilder(
"gs://mco-mm/churn", outputs_to_explain=["churned_probs"]
)
builder.save_metadata(".")
md = builder.get_metadata()
del md["tags"]
del md["framework"]
###Output
_____no_output_____
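###Markdown
If you're curious about what the builder produced, you can inspect the metadata dictionary before attaching it to the model upload request in the next section. The cell below is a minimal sketch; the exact keys depend on the explainable_ai_sdk version, so treat the output as informational.
###Code
# Hedged sketch: pretty-print the explanation metadata generated above.
# The exact keys depend on the SDK version; default=str guards against
# any values that are not directly JSON-serializable.
import json
print(json.dumps(md, indent=2, default=str))
###Output
_____no_output_____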
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
import json
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-5:latest"
ENDPOINT = "us-central1-aiplatform.googleapis.com"
churn_model_path = "gs://mco-mm/churn"
request_data = {
"model": {
"displayName": "churn",
"artifactUri": churn_model_path,
"containerSpec": {"imageUri": IMAGE},
"explanationSpec": {
"parameters": {"sampledShapleyAttribution": {"pathCount": 5}},
"metadata": md,
},
}
}
with open("request_data.json", "w") as outfile:
json.dump(request_data, outfile)
output = !curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{ENDPOINT}/v1/projects/{PROJECT_ID}/locations/{REGION}/models:upload \
-d @request_data.json 2>/dev/null
# print(output)
MODEL_ID = output[1].split()[1].split("/")[5]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
# If auto-testing this notebook, wait for model registration
if os.getenv("IS_TESTING"):
time.sleep(300)
###Output
_____no_output_____
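###Markdown
If you'd like to confirm from the notebook that the model shows up in your project, the cell below is a minimal sketch using gcloud; the command group and flags are assumptions to verify against your SDK version, and the upload can take a few minutes to complete, as described in the next section.
###Code
# Hedged sketch: list the models registered in this project/region and
# look for the newly uploaded "churn" model in the output. The `beta`
# component mirrors the commands used elsewhere in this notebook; your
# SDK version may not need it.
!gcloud --quiet beta ai models list --region=$REGION
###Output
_____no_output_____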
###Markdown
This request will return immediately, but it spawns an asynchronous task that takes several minutes. Periodically check the Vertex Models page on the Cloud Console and don't continue with this lab until you see your newly created model there. It should look something like this: Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
# print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
print(f"Model deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}.")
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON, along with a pie chart summarizing the results.**Try this now by running the next cell and examine the results.**
###Code
import pprint as pp
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
for i in resp.predictions:
vals = i["churned_values"]
probs = i["churned_probs"]
for i in range(len(vals)):
print(vals[i], probs[i])
plt.pie(probs, labels=vals)
plt.show()
pp.pprint(resp)
except Exception as ex:
print("prediction request failed", ex)
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Run an explanation testWe can also run a test of explainable AI on this endpoint. Run the next cell to send a test explanation request. If everything works as expected, you should receive a response encoding the feature importance of this prediction in a text representation called JSON, along with a bar chart summarizing the results.**Try this now by running the next cell and examine the results.**
###Code
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
features = []
scores = []
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT, type="explain")
for i in resp.explanations:
for j in i.attributions:
for k in j.feature_attributions:
features.append(k)
scores.append(j.feature_attributions[k])
features = [x for _, x in sorted(zip(scores, features))]
scores = sorted(scores)
fig, ax = plt.subplots()
fig.set_size_inches(9, 9)
ax.barh(features, scores)
fig.show()
# pp.pprint(resp)
except Exception as ex:
print("explanation request failed", ex)
###Output
_____no_output_____
###Markdown
Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - the prediction target column name in the training dataset.1. Skew detection threshold - the skew threshold for each feature you want to monitor.1. Prediction drift threshold - the drift threshold for each feature you want to monitor.1. Attribution skew detection threshold - the feature importance skew threshold.1. Attribution prediction drift threshold - the feature importance drift threshold.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
ATTRIB_SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
ATTRIB_SKEW_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
ATTRIB_DRIFT_DEFAULT_THRESHOLDS = (
"country,cnt_user_engagement" # @param {type:"string"}
)
ATTRIB_DRIFT_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
attrib_skew_thresholds = get_thresholds(
ATTRIB_SKEW_DEFAULT_THRESHOLDS, ATTRIB_SKEW_CUSTOM_THRESHOLDS
)
attrib_drift_thresholds = get_thresholds(
ATTRIB_DRIFT_DEFAULT_THRESHOLDS, ATTRIB_DRIFT_CUSTOM_THRESHOLDS
)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds,
attribution_score_skew_thresholds=attrib_skew_thresholds,
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds,
attribution_score_drift_thresholds=attrib_drift_thresholds,
)
explanation_config = ModelMonitoringObjectiveConfig.ExplanationConfig(
enable_feature_attributes=True
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
explanation_config=explanation_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to see an example of the layout of these measurements in Cloud Storage. If you substitute the Cloud Storage URL in your job creation email, you can view the structure and content of the data files for your own monitoring job.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.- **model_monitoring/job-../feature_attribution_score** - This folder is used to record data relevant to feature attribution calculations. It contains an ongoing summary of feature attribution scores relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. This cell runs two five-minute tests, one minute apart, so it should take roughly eleven minutes to complete the test.The first test sends 300 fabricated requests (one per second for five minutes) while perturbing two features of interest (cnt_level_start_quickplay and country) by a factor of two. The second test does the same thing but perturbs the selected feature distributions by a factor of three. By perturbing data in two experiments, we're able to trigger both skew and drift alerts.After running this test, it takes at least an hour to assess and report skew and drift alerts, so feel free to proceed with the notebook now and you'll see how to examine the resulting alerts later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
start = 2
end = 3
for multiplier in range(start, end + 1):
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {
"cnt_level_start_quickplay": (
lambda x: x * multiplier,
lambda x: x / multiplier,
)
}
perturb_cat = {"Japan": max(COUNTRY.values()) * multiplier}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
if multiplier < end:
print("sleeping...")
time.sleep(60)
###Output
_____no_output_____
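###Markdown
To build intuition for what the numerical drift test is doing, you can plot the baseline and perturbed sampling distributions for one of the perturbed features. The cell below is a minimal sketch using the `MEAN_SD` table defined in the utility-functions cell; the factor of 3 matches the final test iteration above, and it is purely an illustration of the traffic generator, not of the service's own statistics.
###Code
# Hedged sketch: visualize the baseline vs. perturbed sampling
# distribution for cnt_level_start_quickplay. The perturbation mirrors
# the final test iteration above (mean * 3, sd / 3); negative samples
# are clipped to zero, as in monitoring_test().
mean, sd = MEAN_SD["cnt_level_start_quickplay"]
baseline_samples = np.clip(np.random.normal(mean, sd, 5000), 0, None)
perturbed_samples = np.clip(np.random.normal(mean * 3, sd / 3, 5000), 0, None)
plt.hist(baseline_samples, bins=50, alpha=0.5, label="baseline (training-like)")
plt.hist(perturbed_samples, bins=50, alpha=0.5, label="perturbed (test traffic)")
plt.xlabel("cnt_level_start_quickplay")
plt.ylabel("sample count")
plt.legend()
plt.show()
###Output
_____no_output_____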
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_level_start_quickplay*, *cnt_user_engagement*, and *country* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* and *country* feature attribution values are skewed relative to your training data, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring StatusYou can verify that a given endpoint has an active model monitoring job via the Endpoint summary page: Monitoring AlertsYou can examine the alert details by clicking into the endpoint of interest, and selecting the alerts panel: Feature Value DistributionsYou can also examine the recorded training and production feature distributions by drilling down into a given feature, like this:which yields graphical representations of the feature distribution during both training and production, like this: Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete endpoint resource (use the endpoint ID, not the display name)
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource (use the model ID, not the display name)
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____
###Markdown
Vertex AI Model Monitoring [Open in GCP Notebooks](https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/raw/master/notebooks/official/model_monitoring/model_monitoring.ipynb) Open in Colab View on GitHub Overview What is Vertex AI Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. 
In the next section, you will set up your environment and import this model into your own project. Before you begin Set up your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization]
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project ID and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - the prediction target column name in the training dataset.1. Skew detection threshold - the skew threshold for each feature you want to monitor.1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
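###Markdown
The utility cell defined earlier also includes `list_monitoring_jobs()`, `pause_monitoring_job()`, and `delete_monitoring_job()`. As a sketch, you can use them to inspect or manage the job you just created; the job's resource name is available on the `monitoring_job` response returned above.
###Code
# List all monitoring jobs in this project/region.
list_monitoring_jobs()

# The full resource name of the job created above, e.g.
# projects/.../locations/us-central1/modelDeploymentMonitoringJobs/...
job_name = monitoring_job.name
print("Monitoring job:", job_name)

# Uncomment to pause or delete the job when you are done experimenting:
# pause_monitoring_job(job_name)
# delete_monitoring_job(job_name)
###Output
_____no_output_____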
###Markdown
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
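###Markdown
If you prefer exploring these artifacts from Python rather than with `gsutil`, a sketch using the `google-cloud-storage` client library is shown below, assuming that library is available in your environment. The bucket name is a placeholder; substitute the `cloud-ai-platform-...` bucket shown in the command above.
###Code
from google.cloud import storage

# Placeholder: replace with the cloud-ai-platform-... bucket printed by gsutil above.
BUCKET_NAME = "cloud-ai-platform-REPLACE-WITH-YOUR-BUCKET-SUFFIX"

client = storage.Client()  # uses the default project from your environment
for blob in client.list_blobs(BUCKET_NAME, max_results=20):
    print(blob.name)
###Output
_____no_output_____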
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
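###Markdown
For comparison, you could optionally send a short burst of *unperturbed* traffic as a baseline, which should stay within your thresholds. This sketch simply calls the same `monitoring_test()` function with no perturbations.
###Code
# Optional baseline: a short burst of unperturbed traffic that should stay
# within the thresholds configured above.
baseline_iterations = 30  # keep this small; it is only a sanity check
monitoring_test(baseline_iterations, sleep_time)
###Output
_____no_output_____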
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete the endpoint resource. Note: gcloud addresses endpoints by ID, not
# display name, and any deployed models may need to be undeployed first.
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete the model resource (also addressed by ID).
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____
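###Markdown
The monitoring job itself is a separate resource, so you may also want to remove it. Below is a sketch that uses the `delete_monitoring_job()` helper from the utility cell, assuming the `monitoring_job` response from the job-creation cell is still defined in this session.
###Code
# Also delete the model monitoring job created earlier.
# Assumes `monitoring_job` is still defined from the cell that created it;
# otherwise, use list_monitoring_jobs() to find the job's resource name.
try:
    delete_monitoring_job(monitoring_job.name)
except NameError:
    print("monitoring_job is not defined in this session; use list_monitoring_jobs() to find it.")
###Output
_____no_output_____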
###Markdown
Vertex AI Model Monitoring [Open in GCP Notebooks](https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/raw/master/notebooks/official/model_monitoring/model_monitoring.ipynb) Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. 
In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization]
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understanding the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
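###Markdown
As a quick illustration of how the distributions above are used later in this notebook, the dictionary values act as sampling weights for `random.choices()`, and the `MEAN_SD` entries parameterize normal draws for the numerical features. A minimal sketch:
###Code
# The dictionaries above (e.g. COUNTRY) act as sampling weights, so values that
# are frequent in the training data are drawn more often.
sample_countries = random.choices(list(COUNTRY), weights=list(COUNTRY.values()), k=5)
print(sample_countries)

# Numerical features are drawn from normal distributions parameterized by MEAN_SD
# and clipped at zero, mirroring what monitoring_test() does further below.
mean, sd = MEAN_SD["cnt_user_engagement"]
print(max(round(float(np.random.normal(mean, sd))), 0))
###Output
_____no_output_____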
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
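###Markdown
As with the other *gcloud* commands in this notebook, the same import can also be done from Python with the Vertex AI SDK installed during setup. The sketch below is illustrative only; it reuses the `MODEL_NAME`, `IMAGE`, and `ARTIFACT` values from the previous cell and assumes `PROJECT_ID` and `REGION` are set.
###Code
from google.cloud import aiplatform


def upload_model_with_sdk(project_id, region):
    # Illustrative alternative to the gcloud upload above -- not required here.
    aiplatform.init(project=project_id, location=region)
    model = aiplatform.Model.upload(
        display_name=MODEL_NAME,
        artifact_uri=ARTIFACT,
        serving_container_image_uri=IMAGE,
    )
    return model


# Uncomment to try it:
# sdk_model = upload_model_with_sdk(PROJECT_ID, REGION)
# print(sdk_model.resource_name)
###Output
_____no_output_____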
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examining the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - the prediction target column name in the training dataset.1. Skew detection threshold - the skew threshold for each feature you want to monitor.1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
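###Markdown
If you want to check on the job programmatically, for example to confirm its state before generating traffic, here is a sketch using the same `JobServiceClient` imported in the utility cell and the `monitoring_job` response returned above.
###Code
def get_monitoring_job_state(job_name):
    # Fetch the monitoring job again and return its current state.
    client = JobServiceClient(client_options=dict(api_endpoint=API_ENDPOINT))
    job = client.get_model_deployment_monitoring_job(name=job_name)
    return job.state


print(get_monitoring_job_state(monitoring_job.name))
###Output
_____no_output_____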
###Markdown
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete the endpoint resource. Note: gcloud addresses endpoints by ID, not
# display name, and any deployed models may need to be undeployed first.
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete the model resource (also addressed by ID).
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____
###Markdown
Vertex AI Model Monitoring with Explainable AI Feature Attributions Open in Cloud Notebook Open in Colab View on GitHub Overview What is Vertex AI Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.[Vertex Explainable AI](https://cloud.google.com/vertex-ai/docs/explainable-ai/overview) adds another facet to model monitoring, which we call feature attribution monitoring. Explainable AI enables you to understand the relative contribution of each feature to a resulting prediction. In essence, it assesses the magnitude of each feature's influence.If production traffic differs from training data, or varies substantially over time, **either in terms of model predictions or feature attributions**, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you learn to use the `Vertex AI Model Monitoring` service to detect drift and anomalies in prediction requests from a deployed `Vertex AI Model` resource. This tutorial uses the following Google Cloud ML services:- `Vertex AI Model Monitoring`- `Vertex AI Prediction`- `Vertex AI Model` resource- `Vertex AI Endpoint` resourceThe steps performed include:- Upload a pre-trained model as a `Vertex AI Model` resource.- Create a `Vertex AI Endpoint` resource.- Deploy the `Model` resource to the `Endpoint` resource.- Configure the `Endpoint` resource for model monitoring.- Generate synthetic prediction requests.- Understand how to interpret the statistics, visualizations, other data reported by the model monitoring feature. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Vertex AI Workbench notebooks**, your environment already meets all the requirements to run this notebook. 
You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on the command-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook dashboard. Before you begin InstallationInstall the packages required for executing this notebook.
###Code
import os
import pprint as pp
import sys
assert sys.version_info.major == 3, "This notebook requires Python 3."
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") and not os.getenv("VIRTUAL_ENV")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
# Install Python package dependencies.
print("Installing TensorFlow and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow tensorflow_data_validation[visualization]
! rm -f /opt/conda/lib/python3.7/site-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade explainable_ai_sdk
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Vertex Model Monitoring Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
import sys
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Google Cloud Notebook requires dependencies to be installed with '--user'
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# Install Python package dependencies.
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib \
google-auth-httplib2 oauth2client requests \
google-cloud-aiplatform google-cloud-storage==1.32.0
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. Enter your project ID in the first line of the cell below.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**Model monitoring is currently supported in regions us-central1, europe-west4, asia-east1, and asia-southeast1. To keep things simple for this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). You can use any supported region, so long as all resources are co-located.**
###Code
# Import globally needed dependencies here, after kernel restart.
import copy
import numpy as np
import os
import pprint as pp
import random
import sys
import time
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define utilitiesRun the following cells to define some utility functions and distributions used later in this notebook. Although these utilities are not critical to understanding the main concepts, feel free to expand the cells in this section if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility imports and constants
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
# This is the default value at which you would like the monitoring function to trigger an alert.
# In other words, this value fine-tunes the alerting sensitivity. This threshold can be customized
# on a per feature basis but this is the global default setting.
DEFAULT_THRESHOLD_VALUE = 0.001
# @title Utility functions
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# @title Utility distributions
# This cell contains parameters enabling us to generate realistic test data that closely
# models the feature distributions found in the training data.
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
MODEL_ID = output[1].split("/")[-1]
if _exit_code == 0:
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
else:
print(f"Error creating model: {output}")
###Output
_____no_output_____
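###Markdown
To double-check that the upload succeeded, you can look the model up again. The sketch below uses the Vertex AI SDK's `Model.list()` with a display-name filter; it is illustrative only and assumes `PROJECT_ID`, `REGION`, and `MODEL_NAME` are set as above.
###Code
from google.cloud import aiplatform

aiplatform.init(project=PROJECT_ID, location=REGION)
# List models whose display name matches the one uploaded above.
for m in aiplatform.Model.list(filter=f'display_name="{MODEL_NAME}"'):
    print(m.resource_name, m.display_name)
###Output
_____no_output_____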
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
if _exit_code == 0:
print("Endpoint created.")
else:
print(f"Error creating endpoint: {output}")
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[-1].split()[-1][:-1]
if _exit_code == 0:
print(
f"Model {MODEL_NAME}/{MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}."
)
else:
print(f"Error deploying model to endpoint: {output}")
###Output
_____no_output_____
###Markdown
If you already have a deployed endpointYou can reuse your existing endpoint by filling in the value of your endpoint ID in the next cell and running it. **If you've just deployed an endpoint in the previous cell, you should skip this step.**
###Code
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
if ENDPOINT_ID:
ENDPOINT = f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
print(f"Using endpoint {ENDPOINT}")
else:
print("If you want to reuse an existing endpoint, you must specify the endpoint id above.")
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
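###Markdown
If the request succeeded, you can also consume the response programmatically rather than just pretty-printing it. The following is a minimal sketch only; it assumes the previous cell ran successfully and that each prediction record exposes the `churned_values`/`churned_probs` fields discussed in the next section.
###Code
# Minimal sketch: pull the most probable label out of the response above.
# Assumes `resp` was returned successfully by the previous cell and that each
# prediction record exposes the `churned_values` / `churned_probs` fields.
for pred in resp.predictions:
    vals = pred["churned_values"]
    probs = pred["churned_probs"]
    best = max(range(len(probs)), key=lambda i: probs[i])
    print("predicted label:", vals[best], "with probability", probs[best])
###Output
_____no_output_____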
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. User email - The email address to which you would like monitoring alerts sent.1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "[your-email-address]" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
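###Markdown
To build some intuition for what these thresholds mean, here is a toy, back-of-the-envelope sketch of a skew-style statistic for a categorical feature: the largest absolute difference between category proportions in a baseline distribution and in serving traffic. This is only an illustration of the kind of comparison a threshold is applied to, not the service's exact implementation.
###Code
# Toy illustration only (not the monitoring service's exact statistic):
# compare two categorical distributions by the largest absolute difference
# between their category proportions. A skew threshold like the ones above
# is compared against a distance of this general kind.
def toy_skew_statistic(train_counts, serving_counts):
    train_total = sum(train_counts.values())
    serving_total = sum(serving_counts.values())
    keys = set(train_counts) | set(serving_counts)
    return max(
        abs(train_counts.get(k, 0) / train_total - serving_counts.get(k, 0) / serving_total)
        for k in keys
    )

# Mimic the perturbation used later in this notebook: double Japan's sampling weight.
skewed_country = dict(COUNTRY, Japan=max(COUNTRY.values()) * 2)
print("toy country skew statistic:", round(toy_skew_statistic(COUNTRY, skewed_country), 3))
###Output
_____no_output_____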
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. To do this successfully, you need to specify your alerting thresholds (for both skew and drift), your training data source, and apply those settings to all deployed models on your new endpoint (of which there should only be one at this point).Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
# Set thresholds specifying alerting criteria for training/serving skew and create config object.
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
# Set thresholds specifying alerting criteria for serving drift and create config object.
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
# Specify training dataset source location (used for schema generation).
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
# Aggregate the above settings into a ModelMonitoringObjectiveConfig object and use
# that object to adjust the ModelDeploymentMonitoringObjectiveConfig object.
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
# Find all deployed model ids on the created endpoint and set objectives for each.
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_configs = set_objectives(model_ids, objective_template)
# Create the monitoring job for all deployed models on this endpoint.
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console as well for this function. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook and you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
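###Markdown
While the test above runs (or afterwards), you can get a rough local preview of what the perturbation does to the *cnt_user_engagement* distribution. This sketch simply samples from the notebook's MEAN_SD constants with and without the perturbation applied above; it is not data recorded by the monitoring job.
###Code
# Rough local preview of the numerical perturbation applied above: the test
# multiplies the cnt_user_engagement mean by 3 and divides its standard
# deviation by 3. Sampled from this notebook's MEAN_SD constants, not from
# the monitoring job's recorded measurements.
import matplotlib.pyplot as plt
import numpy as np

mean, sd = MEAN_SD["cnt_user_engagement"]
baseline = np.clip(np.random.normal(mean, sd, 5000), 0, None)
perturbed = np.clip(np.random.normal(mean * 3, sd / 3, 5000), 0, None)

plt.hist(baseline, bins=50, alpha=0.5, label="training-like")
plt.hist(perturbed, bins=50, alpha=0.5, label="perturbed serving-like")
plt.xlabel("cnt_user_engagement")
plt.ylabel("count")
plt.legend()
plt.show()
###Output
_____no_output_____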
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
out = !gcloud ai endpoints undeploy-model $ENDPOINT_ID --deployed-model-id $DEPLOYED_MODEL_ID
if _exit_code == 0:
print("Model undeployed.")
else:
print("Error undeploying model:", out)
out = !gcloud ai endpoints delete $ENDPOINT_ID --quiet
if _exit_code == 0:
print("Endpoint deleted.")
else:
print("Error deleting endpoint:", out)
out = !gcloud ai models delete $MODEL_ID --quiet
if _exit_code == 0:
print("Model deleted.")
else:
print("Error deleting model:", out)
###Output
_____no_output_____
###Markdown
Vertex AI Model Monitoring with Explainable AI Feature Attributions Open in Cloud Notebook Open in Colab View on GitHub Overview What is Vertex AI Model Monitoring?Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.[Vertex Explainable AI](https://cloud.google.com/vertex-ai/docs/explainable-ai/overview) adds another facet to model monitoring, which we call feature attribution monitoring. Explainable AI enables you to understand the relative contribution of each feature to a resulting prediction. In essence, it assesses the magnitude of each feature's influence.If production traffic differs from training data, or varies substantially over time, **either in terms of model predictions or feature attributions**, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e.
stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import pprint as pp
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow tensorflow_data_validation[visualization]
! rm -f /opt/conda/lib/python3.7/site-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade explainable_ai_sdk
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
# Import required packages.
import os
import random
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with the Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functions and data structuresRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
from google.cloud.aiplatform_v1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1.types.io import BigQuerySource
from google.cloud.aiplatform_v1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1.types.prediction_service import (
ExplainRequest, PredictRequest)
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input, type="predict"):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
if type == "predict":
obj = PredictRequest
method = client.predict
elif type == "explain":
obj = ExplainRequest
method = client.explain
else:
raise Exception("unsupported request type:" + type)
params = {}
params = json_format.ParseDict(params, Value())
request = obj(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = None
try:
response = method(request)
except Exception as ex:
print(ex)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Generate model metadata for explainable AIRun the following cell to extract metadata from the exported model, which is needed for generating the prediction explanations.
###Code
builder = SavedModelMetadataBuilder(
"gs://mco-mm/churn", outputs_to_explain=["churned_probs"]
)
builder.save_metadata(".")
md = builder.get_metadata()
del md["tags"]
del md["framework"]
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
import json
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-5:latest"
ENDPOINT = "us-central1-aiplatform.googleapis.com"
churn_model_path = "gs://mco-mm/churn"
request_data = {
"model": {
"displayName": "churn",
"artifactUri": churn_model_path,
"containerSpec": {"imageUri": IMAGE},
"explanationSpec": {
"parameters": {"sampledShapleyAttribution": {"pathCount": 5}},
"metadata": md,
},
}
}
with open("request_data.json", "w") as outfile:
json.dump(request_data, outfile)
output = !curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{ENDPOINT}/v1/projects/{PROJECT_ID}/locations/{REGION}/models:upload \
-d @request_data.json 2>/dev/null
# print(output)
MODEL_ID = output[1].split()[1].split("/")[5]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
# If auto-testing this notebook, wait for model registration
if os.getenv("IS_TESTING"):
time.sleep(300)
###Output
_____no_output_____
###Markdown
This request will return immediately but it spawns an asynchronous task that takes several minutes. Periodically check the Vertex Models page on the Cloud Console and don't continue with this lab until you see your newly created model there. It should look something like this: Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete.**
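If you prefer to wait from within the notebook rather than watching the console, a polling loop along the following lines can work. This is only a sketch: it assumes the `gcloud ai models describe` command is available in your gcloud release and that MODEL_ID and REGION were set in the cells above.
###Code
# Optional sketch: poll until model registration completes, instead of watching
# the Cloud Console. Assumes `gcloud ai models describe` exists in your gcloud
# release and that MODEL_ID and REGION are set above; adjust as needed.
import time

for attempt in range(20):
    out = !gcloud ai models describe $MODEL_ID --region=$REGION --format="value(name)"
    if out and out[-1].startswith("projects/"):
        print("Model is registered:", out[-1])
        break
    print(f"Model not visible yet (attempt {attempt + 1}); sleeping 30 seconds...")
    time.sleep(30)
###Output
_____no_output_____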
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
# print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
print(f"Model deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}.")
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON, along with a pie chart summarizing the results.**Try this now by running the next cell and examine the results.**
###Code
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
for i in resp.predictions:
vals = i["churned_values"]
probs = i["churned_probs"]
for i in range(len(vals)):
print(vals[i], probs[i])
plt.pie(probs, labels=vals)
plt.show()
pp.pprint(resp)
except Exception as ex:
print("prediction request failed", ex)
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Run an explanation testWe can also run a test of explainable AI on this endpoint. Run the next cell to send a test explanation request. If everything works as expected, you should receive a response encoding the feature importance of this prediction in a text representation called JSON, along with a bar chart summarizing the results.**Try this now by running the next cell and examine the results.**
###Code
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
features = []
scores = []
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT, type="explain")
for i in resp.explanations:
for j in i.attributions:
for k in j.feature_attributions:
features.append(k)
scores.append(j.feature_attributions[k])
features = [x for _, x in sorted(zip(scores, features))]
scores = sorted(scores)
fig, ax = plt.subplots()
fig.set_size_inches(9, 9)
ax.barh(features, scores)
fig.show()
# pp.pprint(resp)
except Exception as ex:
print("explanation request failed", ex)
###Output
_____no_output_____
###Markdown
Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.1. Attribution skew detection threshold - The feature importance skew threshold.1. Attribution prediction drift threshold - The feature importance drift threshold.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
ATTRIB_SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
ATTRIB_SKEW_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
ATTRIB_DRIFT_DEFAULT_THRESHOLDS = (
"country,cnt_user_engagement" # @param {type:"string"}
)
ATTRIB_DRIFT_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
attrib_skew_thresholds = get_thresholds(
ATTRIB_SKEW_DEFAULT_THRESHOLDS, ATTRIB_SKEW_CUSTOM_THRESHOLDS
)
attrib_drift_thresholds = get_thresholds(
ATTRIB_DRIFT_DEFAULT_THRESHOLDS, ATTRIB_DRIFT_CUSTOM_THRESHOLDS
)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds,
attribution_score_skew_thresholds=attrib_skew_thresholds,
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds,
attribution_score_drift_thresholds=attrib_drift_thresholds,
)
explanation_config = ModelMonitoringObjectiveConfig.ExplanationConfig(
enable_feature_attributes=True
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
explanation_config=explanation_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to see an example of the layout of these measurements in Cloud Storage. If you substitute the Cloud Storage URL in your job creation email, you can view the structure and content of the data files for your own monitoring job.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.- **model_monitoring/job-../feature_attribution_score** - This folder is used to record data relevant to feature attribution calculations. It contains an ongoing summary of feature attribution scores relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console as well for this function. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. This cell runs two five-minute tests, one minute apart, so it should take roughly eleven minutes to complete the test.The first test sends 300 fabricated requests (one per second for five minutes) while perturbing two features of interest (cnt_level_start_quickplay and country) by a factor of two. The second test does the same thing but perturbs the selected feature distributions by a factor of three. By perturbing data in two experiments, we're able to trigger both skew and drift alerts.After running this test, it takes at least an hour to assess and report skew and drift alerts, so feel free to proceed with the notebook now and you'll see how to examine the resulting alerts later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
start = 2
end = 3
for multiplier in range(start, end + 1):
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {
"cnt_level_start_quickplay": (
lambda x: x * multiplier,
lambda x: x / multiplier,
)
}
perturb_cat = {"Japan": max(COUNTRY.values()) * multiplier}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
if multiplier < end:
print("sleeping...")
time.sleep(60)
###Output
_____no_output_____
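###Markdown
While waiting for the monitoring window to elapse, you can sketch locally how the perturbation above shifts the country distribution. This is only an approximation built from this notebook's COUNTRY sampling weights (with Japan's weight tripled, as in the final pass above); it is not the distribution data recorded by the monitoring job.
###Code
# Local approximation of the country-distribution shift produced by the test
# above (Japan's sampling weight is tripled in the final pass). Built from the
# notebook's COUNTRY weights, not from the monitoring job's recorded data.
import matplotlib.pyplot as plt
import numpy as np

perturbed = dict(COUNTRY, Japan=max(COUNTRY.values()) * 3)

def proportions(counts):
    total = sum(counts.values())
    return [counts[k] / total for k in COUNTRY]  # keep a fixed key order

x = np.arange(len(COUNTRY))
width = 0.4
plt.figure(figsize=(12, 4))
plt.bar(x - width / 2, proportions(COUNTRY), width, label="training-like")
plt.bar(x + width / 2, proportions(perturbed), width, label="perturbed serving-like")
plt.xticks(x, list(COUNTRY), rotation=45, ha="right")
plt.ylabel("proportion")
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____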
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_level_start_quickplay*, *cnt_user_engagement*, and *country* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* and *country* feature attribution values are skewed relative to your training data, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring StatusYou can verify that a given endpoint has an active model monitoring job via the Endpoint summary page: Monitoring AlertsYou can examine the alert details by clicking into the endpoint of interest, and selecting the alerts panel: Feature Value DistributionsYou can also examine the recorded training and production feature distributions by drilling down into a given feature, like this:which yields graphical representations of the feature distribution during both training and production, like this: Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete endpoint resource (these commands expect resource IDs rather than display names;
# if a model is still deployed to the endpoint, undeploy it first)
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____ |
notebooks/9.0 Case Study 2 - Figuring Our Which Customers May Leave - Churn.ipynb | ###Markdown
Figuring Out Which Customers May Leave - Churn Analysis About our DatasetSource - https://www.kaggle.com/blastchar/telco-customer-churn1. We have customer information for a Telecommunications company2. We've got customer IDs, general customer info, the services they've subscribed to, type of contract and monthly charges.3. This is historic customer information, so we have a field stating whether that customer has **churned** **Field Descriptions**- customerID - Customer ID- gender - Whether the customer is a male or a female- SeniorCitizen - Whether the customer is a senior citizen or not (1, 0)- Partner - Whether the customer has a partner or not (Yes, No)- Dependents - Whether the customer has dependents or not (Yes, No)- tenure - Number of months the customer has stayed with the company- PhoneService - Whether the customer has a phone service or not (Yes, No)- MultipleLines - Whether the customer has multiple lines or not (Yes, No, No phone service)- InternetService - Customer’s internet service provider (DSL, Fiber optic, No)- OnlineSecurity - Whether the customer has online security or not (Yes, No, No internet service)- OnlineBackup - Whether the customer has online backup or not (Yes, No, No internet service)- DeviceProtection - Whether the customer has device protection or not (Yes, No, No internet service)- TechSupport - Whether the customer has tech support or not (Yes, No, No internet service)- StreamingTV - Whether the customer has streaming TV or not (Yes, No, No internet service)- StreamingMovies - Whether the customer has streaming movies or not (Yes, No, No internet service)- Contract - The contract term of the customer (Month-to-month, One year, Two year)- PaperlessBilling - Whether the customer has paperless billing or not (Yes, No)- PaymentMethod - The customer’s payment method (Electronic check, Mailed check, Bank transfer (automatic), Credit card (automatic))- MonthlyCharges - The amount charged to the customer monthly- TotalCharges - The total amount charged to the customer- Churn - Whether the customer churned or not (Yes or No)***Customer Churn*** - churn is when an existing customer, user, player, subscriber or any kind of returning client stops doing business or ends the relationship with a company.**Aim -** to figure out which customers are likely to churn in the future
###Code
# Load our data
import pandas as pd
# Uncomment this line if using this notebook locally
#churn_df = pd.read_csv('./data/churn/WA_Fn-UseC_-Telco-Customer-Churn.csv')
file_name = "https://raw.githubusercontent.com/rajeevratan84/datascienceforbusiness/master/WA_Fn-UseC_-Telco-Customer-Churn.csv"
churn_df = pd.read_csv(file_name)
# We use the dataframe name followed by a '.head()' to use the head function to
# preview the first 5 records of the dataframe. If you wanted to preview the first 10, simply
# put dataframe_name.head(10)
churn_df.head()
# Get summary stats on our numeric columns
churn_df.describe()
# List unique values in a column, e.g. churn_df['SeniorCitizen'] or churn_df.SeniorCitizen
churn_df.SeniorCitizen.unique()
# View unique values for tenure; we can see this is a numeric feature with many distinct values
churn_df.tenure.unique()
len(churn_df.MonthlyCharges.unique())
# Summarize our dataset
print ("Rows : " ,churn_df.shape[0])
print ("Columns : " ,churn_df.shape[1])
print ("\nFeatures : \n" ,churn_df.columns.tolist())
print ("\nMissing values : ", churn_df.isnull().sum().values.sum())
print ("\nUnique values : \n",churn_df.nunique())
churn_df['Churn'].value_counts(sort = False)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
# Keep a copy incase we need to look at the original dataset in future
churn_df_copy = churn_df.copy()
churn_df_copy.drop(['customerID','MonthlyCharges', 'TotalCharges', 'tenure'], axis=1, inplace=True)
churn_df_copy.head()
# Create a new dataset called summary so that we can summarize our churn data
# Crosstab - Compute a simple cross tabulation of two (or more) factors. By default computes a frequency table of the factors unless an array of values and an aggregation function are passed.
summary = pd.concat([pd.crosstab(churn_df_copy[x], churn_df_copy.Churn) for x in churn_df_copy.columns[:-1]], keys=churn_df_copy.columns[:-1])
summary
# Example of how Cross Tab works
# A crosstabulation (also known as a contingency table) shows the frequency between two variables. This is the default functionality for crosstab if given two columns.
import numpy as np
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
'B' : ['X', 'Y', 'Z'] * 4,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'D' : np.random.randn(12),
'E' : np.random.randn(12)})
df
# We get the frequency of the categories in B with categories in A
pd.crosstab(index = df.A, columns = df.B)
###Output
_____no_output_____
###Markdown
Let's make a percentage column
###Code
summary['Churn_Percentage'] = summary['Yes'] / (summary['No'] + summary['Yes'])
summary
###Output
_____no_output_____
###Markdown
Visualizations and EDA
###Code
import matplotlib.pyplot as plt # this is used for the plot the graph
import seaborn as sns # used for plot interactive graph.
from pylab import rcParams # Customize Matplotlib plots using rcParams
# Data to plot
labels = churn_df['Churn'].value_counts(sort = True).index
sizes = churn_df['Churn'].value_counts(sort = True)
colors = ["lightblue","red"]
explode = (0.05,0) # explode 1st slice
rcParams['figure.figsize'] = 7,7
# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90,)
plt.title('Customer Churn Breakdown')
plt.show()
labels
# Create a Violin Plot showing how monthly charges relate to Churn
# We can see that churned customers tend to be higher-paying customers
g = sns.factorplot(x="Churn", y = "MonthlyCharges",data = churn_df, kind="violin", palette = "Pastel1")
# Let's look at Tenure
g = sns.factorplot(x="Churn", y = "tenure",data = churn_df, kind="violin", palette = "Pastel1")
# Correlation plot doesn't end up being too informative
import matplotlib.pyplot as plt
def plot_corr(df,size=10):
'''Function plots a graphical correlation matrix for each pair of columns in the dataframe.
Input:
df: pandas DataFrame
size: vertical and horizontal size of the plot'''
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.legend()
cax = ax.matshow(corr)
fig.colorbar(cax)
plt.xticks(range(len(corr.columns)), corr.columns, rotation='vertical')
plt.yticks(range(len(corr.columns)), corr.columns)
plot_corr(churn_df)
###Output
No handles with labels found to put in legend.
###Markdown
Prepare Data for Machine Learning Classifer
###Code
# Check for empty fields, Note, " " is not Null but a spaced character
len(churn_df[churn_df['TotalCharges'] == " "])
## Drop missing data
churn_df = churn_df[churn_df['TotalCharges'] != " "]
len(churn_df[churn_df['TotalCharges'] == " "])
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
#customer id col
Id_col = ['customerID']
#Target columns
target_col = ["Churn"]
#categorical columns
cat_cols = churn_df.nunique()[churn_df.nunique() < 6].keys().tolist()
cat_cols = [x for x in cat_cols if x not in target_col]
#numerical columns
num_cols = [x for x in churn_df.columns if x not in cat_cols + target_col + Id_col]
#Binary columns with 2 values
bin_cols = churn_df.nunique()[churn_df.nunique() == 2].keys().tolist()
#Columns more than 2 values
multi_cols = [i for i in cat_cols if i not in bin_cols]
#Label encoding Binary columns
le = LabelEncoder()
for i in bin_cols :
churn_df[i] = le.fit_transform(churn_df[i])
#Duplicating columns for multi value columns
churn_df = pd.get_dummies(data = churn_df, columns = multi_cols )
churn_df.head()
len(churn_df.columns)
num_cols
#Scaling Numerical columns
std = StandardScaler()
# Scale data
scaled = std.fit_transform(churn_df[num_cols])
scaled = pd.DataFrame(scaled,columns=num_cols)
#dropping original values merging scaled values for numerical columns
df_telcom_og = churn_df.copy()
churn_df = churn_df.drop(columns = num_cols,axis = 1)
churn_df = churn_df.merge(scaled, left_index=True, right_index=True, how = "left")
#churn_df.info()
churn_df.head()
churn_df.drop(['customerID'], axis=1, inplace=True)
churn_df.head()
churn_df[churn_df.isnull().any(axis=1)]
churn_df = churn_df.dropna()
# Double check that nulls have been removed
churn_df[churn_df.isnull().any(axis=1)]
###Output
_____no_output_____
###Markdown
Modeling
###Code
from sklearn.model_selection import train_test_split
# We remove the label values from our training data
X = churn_df.drop(['Churn'], axis=1).values
# We assigned those label values to our Y dataset
y = churn_df['Churn'].values
# Split it to a 70:30 Ratio Train:Test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
type(X_train)
df_train = pd.DataFrame(X_train)
df_train.head()
print(len(churn_df.columns))
churn_df.columns
churn_df.head()
###Output
_____no_output_____
###Markdown
Fit a Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
model = LogisticRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
score = model.score(X_test, y_test)
print("Accuracy = " + str(score))
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
###Output
Accuracy = 0.7755102040816326
[[1366 179]
[ 294 268]]
precision recall f1-score support
0 0.82 0.88 0.85 1545
1 0.60 0.48 0.53 562
accuracy 0.78 2107
macro avg 0.71 0.68 0.69 2107
weighted avg 0.76 0.78 0.77 2107
###Markdown
Feature Importance using Logistic Regression
###Code
# Let's see what features mattered most i.e. Feature Importance
# We sort on the co-efficients with the largest weights as those impact the resulting output the most
coef = model.coef_[0]
coef = [abs(number) for number in coef]
print(coef)
# Finding and deleting the label column
cols = list(churn_df.columns)
cols.remove('Churn')  # more robust than deleting by a hard-coded index
cols
# Sorting on Feature Importance
sorted_index = sorted(range(len(coef)), key = lambda k: coef[k], reverse = True)
for idx in sorted_index:
print(cols[idx])
###Output
Contract_Two year
Contract_Month-to-month
InternetService_DSL
PaperlessBilling
OnlineSecurity_Yes
PhoneService
Partner
TechSupport_Yes
PaymentMethod_Credit card (automatic)
OnlineBackup_Yes
InternetService_Fiber optic
PaymentMethod_Bank transfer (automatic)
InternetService_No
OnlineSecurity_No internet service
OnlineBackup_No internet service
DeviceProtection_No internet service
TechSupport_No internet service
StreamingTV_No internet service
StreamingMovies_No internet service
StreamingMovies_No
Contract_One year
MultipleLines_Yes
StreamingTV_No
PaymentMethod_Electronic check
DeviceProtection_Yes
TotalCharges
OnlineSecurity_No
MultipleLines_No
Dependents
MultipleLines_No phone service
SeniorCitizen
TechSupport_No
OnlineBackup_No
tenure
PaymentMethod_Mailed check
DeviceProtection_No
StreamingTV_Yes
MonthlyCharges
gender
StreamingMovies_Yes
###Markdown
Try Random Forests
###Code
# Let's try Random Forests now to see if our results get better
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
model_rf = RandomForestClassifier()
model_rf.fit(X_train, y_train)
predictions = model_rf.predict(X_test)
score = model_rf.score(X_test, y_test)
print("Accuracy = " + str(score))
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
###Output
Accuracy = 0.7508305647840532
[[1379 166]
[ 359 203]]
precision recall f1-score support
0 0.79 0.89 0.84 1545
1 0.55 0.36 0.44 562
accuracy 0.75 2107
macro avg 0.67 0.63 0.64 2107
weighted avg 0.73 0.75 0.73 2107
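###Markdown
Random forests expose their own importance measure via `feature_importances_`. A minimal sketch, reusing the feature names from the dataframe:
###Code
# Rank features by the random forest's impurity-based importances
feature_names = churn_df.drop(['Churn'], axis=1).columns
rf_importance = pd.Series(model_rf.feature_importances_, index=feature_names).sort_values(ascending=False)
print(rf_importance.head(10))
###Output
_____no_output_____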
###Markdown
Saving & Loading Models
###Code
import pickle
# save
with open('model.pkl','wb') as f:
pickle.dump(model_rf, f)
# load
with open('model.pkl', 'rb') as f:
loaded_model_rf = pickle.load(f)
predictions = loaded_model_rf.predict(X_test)
###Output
_____no_output_____
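###Markdown
For scikit-learn estimators, `joblib` is a common alternative to `pickle` and tends to handle large NumPy arrays more efficiently. A minimal sketch (the filename is just an example):
###Code
import joblib

# Save and reload the random forest with joblib (alternative to pickle)
joblib.dump(model_rf, 'model_rf.joblib')
loaded_rf = joblib.load('model_rf.joblib')
print(loaded_rf.score(X_test, y_test))
###Output
_____no_output_____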
###Markdown
Try Deep Learning
###Code
# Use TensorFlow 2.x (Colab magic command)
%tensorflow_version 2.x
# Check to ensure we're using our GPU
import tensorflow as tf
tf.test.gpu_device_name()
# Create a simple model
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(20, kernel_initializer = "uniform",activation = "relu", input_dim=40))
model.add(Dense(1, kernel_initializer = "uniform",activation = "sigmoid"))
model.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"])
# Display Model Summary and Show Parameters
model.summary()
# Start Training Our Classifier
batch_size = 64
epochs = 25
history = model.fit(X_train,
y_train,
batch_size = batch_size,
epochs = epochs,
verbose = 1,
validation_data = (X_test, y_test))
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
predictions = model.predict(X_test)
predictions = (predictions > 0.5)
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
###Output
[[1369 176]
[ 300 262]]
precision recall f1-score support
0 0.82 0.89 0.85 1545
1 0.60 0.47 0.52 562
accuracy 0.77 2107
macro avg 0.71 0.68 0.69 2107
weighted avg 0.76 0.77 0.76 2107
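###Markdown
The `history` object returned by `fit()` stores per-epoch metrics, which makes it easy to eyeball over- or under-fitting. A minimal sketch of the loss curves:
###Code
import matplotlib.pyplot as plt

# Plot training vs. validation loss across epochs
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('binary cross-entropy')
plt.legend()
plt.show()
###Output
_____no_output_____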
###Markdown
Saving and Loading our Deep Learning models
###Code
model.save("simple_cnn_25_epochs.h5")
print("Model Saved")
# Load our model
from tensorflow.keras.models import load_model
classifier = load_model('simple_cnn_25_epochs.h5')
###Output
_____no_output_____
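###Markdown
A quick sanity check that the reloaded model behaves like the original: evaluate it on the same test set and compare the scores.
###Code
# The reloaded model should reproduce the original test metrics
loaded_score = classifier.evaluate(X_test, y_test, verbose=0)
print('Reloaded model - Test loss:', loaded_score[0])
print('Reloaded model - Test accuracy:', loaded_score[1])
###Output
_____no_output_____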
###Markdown
Let's try a Deeper Model and Learn to use Checkpoints and Early stopping
###Code
from tensorflow.keras.regularizers import l2
from tensorflow.keras.layers import Dropout
from tensorflow.keras.callbacks import ModelCheckpoint
model2 = Sequential()
# Hidden Layer 1 (input_dim must match the number of features)
model2.add(Dense(2000, activation='relu', input_dim=40, kernel_regularizer=l2(0.01)))
model2.add(Dropout(0.3, noise_shape=None, seed=None))
# Hidden Layer 2 (input_dim is only needed on the first layer, so it is omitted here)
model2.add(Dense(1000, activation='relu', kernel_regularizer=l2(0.01)))
model2.add(Dropout(0.3, noise_shape=None, seed=None))
# Hidden Layer 3
model2.add(Dense(500, activation='relu', kernel_regularizer=l2(0.01)))
model2.add(Dropout(0.3, noise_shape=None, seed=None))
model2.add(Dense(1, activation='sigmoid'))
model2.summary()
# Create a checkpoint callback that saves the best model (lowest validation loss) seen so far
checkpoint = ModelCheckpoint("deep_model_checkpoint.h5",
monitor="val_loss",
mode="min",
save_best_only = True,
verbose=1)
model2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Define our early stopping criteria
from tensorflow.keras.callbacks import EarlyStopping
earlystop = EarlyStopping(monitor = 'val_loss', # quantity being monitored for improvement
                          min_delta = 0, # minimum absolute change required to count as an improvement
                          patience = 2, # number of epochs with no improvement before stopping
                          verbose = 1,
                          restore_best_weights = True) # restore the best weights once training stops
# Put our callbacks into a list
callbacks = [earlystop, checkpoint]
batch_size = 32
epochs = 10
history = model2.fit(X_train,
y_train,
batch_size = batch_size,
epochs = epochs,
verbose = 1,
# NOTE We are adding our callbacks here
callbacks = callbacks,
validation_data = (X_test, y_test))
score = model2.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
_____no_output_____ |
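###Markdown
Since `ModelCheckpoint` was set with `save_best_only=True`, the file `deep_model_checkpoint.h5` holds the model from the epoch with the lowest validation loss. A minimal sketch of reloading and evaluating that checkpoint, reusing `load_model` imported earlier:
###Code
# Reload the best checkpointed model and confirm its test performance
best_model = load_model('deep_model_checkpoint.h5')
best_score = best_model.evaluate(X_test, y_test, verbose=0)
print('Best checkpoint - Test loss:', best_score[0])
print('Best checkpoint - Test accuracy:', best_score[1])
###Output
_____no_output_____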